00:00:00.000 Started by upstream project "autotest-nightly" build number 3889
00:00:00.000 originally caused by:
00:00:00.001 Started by upstream project "nightly-trigger" build number 3269
00:00:00.001 originally caused by:
00:00:00.001 Started by timer
00:00:00.001 Started by timer
00:00:00.200 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.201 The recommended git tool is: git
00:00:00.201 using credential 00000000-0000-0000-0000-000000000002
00:00:00.203 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/freebsd-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.225 Fetching changes from the remote Git repository
00:00:00.226 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.241 Using shallow fetch with depth 1
00:00:00.241 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.241 > git --version # timeout=10
00:00:00.252 > git --version # 'git version 2.39.2'
00:00:00.252 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.262 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.262 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:06.688 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:06.698 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:06.710 Checking out Revision 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d (FETCH_HEAD)
00:00:06.710 > git config core.sparsecheckout # timeout=10
00:00:06.720 > git read-tree -mu HEAD # timeout=10
00:00:06.736 > git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=5
00:00:06.755 Commit message: "inventory: add WCP3 to free inventory"
00:00:06.756 > git rev-list --no-walk 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d # timeout=10
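The checkout above pins jbp to one exact revision: a depth-1 fetch of refs/heads/master followed by a detached checkout of the commit resolved from FETCH_HEAD. A rough local equivalent, as a sketch only (the credential helper, GIT_ASKPASS setup, and proxy configuration recorded in the log are omitted):

    git init jbp && cd jbp
    # depth=1 downloads only the tip commit, matching "Using shallow fetch with depth 1"
    git fetch --tags --force --progress --depth=1 -- \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    # detach onto the commit the job resolved from FETCH_HEAD
    git checkout -f 9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d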
00:00:06.864 [Pipeline] Start of Pipeline
00:00:06.881 [Pipeline] library
00:00:06.884 Loading library shm_lib@master
00:00:06.884 Library shm_lib@master is cached. Copying from home.
00:00:06.899 [Pipeline] node
00:00:06.910 Running on VM-host-SM0 in /var/jenkins/workspace/freebsd-vg-autotest
00:00:06.911 [Pipeline] {
00:00:06.920 [Pipeline] catchError
00:00:06.921 [Pipeline] {
00:00:06.932 [Pipeline] wrap
00:00:06.940 [Pipeline] {
00:00:06.948 [Pipeline] stage
00:00:06.950 [Pipeline] { (Prologue)
00:00:06.968 [Pipeline] echo
00:00:06.969 Node: VM-host-SM0
00:00:06.975 [Pipeline] cleanWs
00:00:06.983 [WS-CLEANUP] Deleting project workspace...
00:00:06.983 [WS-CLEANUP] Deferred wipeout is used...
00:00:06.988 [WS-CLEANUP] done
00:00:07.230 [Pipeline] setCustomBuildProperty
00:00:07.294 [Pipeline] httpRequest
00:00:07.316 [Pipeline] echo
00:00:07.317 Sorcerer 10.211.164.101 is alive
00:00:07.325 [Pipeline] httpRequest
00:00:07.329 HttpMethod: GET
00:00:07.329 URL: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:07.329 Sending request to url: http://10.211.164.101/packages/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:07.343 Response Code: HTTP/1.1 200 OK
00:00:07.344 Success: Status code 200 is in the accepted range: 200,404
00:00:07.344 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:10.842 [Pipeline] sh
00:00:11.122 + tar --no-same-owner -xf jbp_9bf0dabeadcf84e29a3d5dbec2430e38aceadf8d.tar.gz
00:00:11.139 [Pipeline] httpRequest
00:00:11.159 [Pipeline] echo
00:00:11.161 Sorcerer 10.211.164.101 is alive
00:00:11.172 [Pipeline] httpRequest
00:00:11.176 HttpMethod: GET
00:00:11.176 URL: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:11.177 Sending request to url: http://10.211.164.101/packages/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:00:11.204 Response Code: HTTP/1.1 200 OK
00:00:11.204 Success: Status code 200 is in the accepted range: 200,404
00:00:11.205 Saving response body to /var/jenkins/workspace/freebsd-vg-autotest/spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:28.010 [Pipeline] sh
00:01:28.289 + tar --no-same-owner -xf spdk_719d03c6adf20011bb50ac4109e0be7741c0d1c5.tar.gz
00:01:31.586 [Pipeline] sh
00:01:31.870 + git -C spdk log --oneline -n5
00:01:31.870 719d03c6a sock/uring: only register net impl if supported
00:01:31.870 e64f085ad vbdev_lvol_ut: unify usage of dummy base bdev
00:01:31.870 9937c0160 lib/rdma: bind TRACE_BDEV_IO_START/DONE to OBJECT_NVMF_RDMA_IO
00:01:31.870 6c7c1f57e accel: add sequence outstanding stat
00:01:31.870 3bc8e6a26 accel: add utility to put task
00:01:31.885 [Pipeline] writeFile
00:01:31.896 [Pipeline] sh
00:01:32.206 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:32.217 [Pipeline] sh
00:01:32.495 + cat autorun-spdk.conf
00:01:32.495 SPDK_TEST_UNITTEST=1
00:01:32.495 SPDK_RUN_VALGRIND=0
00:01:32.495 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:32.495 SPDK_TEST_NVME=1
00:01:32.495 SPDK_TEST_BLOCKDEV=1
00:01:32.495 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:32.502 RUN_NIGHTLY=1
00:01:32.504 [Pipeline] }
00:01:32.522 [Pipeline] // stage
00:01:32.540 [Pipeline] stage
00:01:32.542 [Pipeline] { (Run VM)
00:01:32.558 [Pipeline] sh
00:01:32.837 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:32.837 + echo 'Start stage prepare_nvme.sh'
00:01:32.837 Start stage prepare_nvme.sh
00:01:32.837 + [[ -n 1 ]]
00:01:32.837 + disk_prefix=ex1
00:01:32.837 + [[ -n /var/jenkins/workspace/freebsd-vg-autotest ]]
00:01:32.837 + [[ -e /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf ]]
00:01:32.837 + source /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf
00:01:32.837 ++ SPDK_TEST_UNITTEST=1
00:01:32.837 ++ SPDK_RUN_VALGRIND=0
00:01:32.837 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:32.837 ++ SPDK_TEST_NVME=1
00:01:32.837 ++ SPDK_TEST_BLOCKDEV=1
00:01:32.837 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:32.837 ++ RUN_NIGHTLY=1
00:01:32.837 + cd /var/jenkins/workspace/freebsd-vg-autotest
00:01:32.837 + nvme_files=()
00:01:32.837 + declare -A nvme_files
00:01:32.837 + backend_dir=/var/lib/libvirt/images/backends
00:01:32.837 + nvme_files['nvme.img']=5G
00:01:32.837 + nvme_files['nvme-cmb.img']=5G
00:01:32.837 + nvme_files['nvme-multi0.img']=4G
00:01:32.837 + nvme_files['nvme-multi1.img']=4G
00:01:32.837 + nvme_files['nvme-multi2.img']=4G
00:01:32.837 + nvme_files['nvme-openstack.img']=8G
00:01:32.837 + nvme_files['nvme-zns.img']=5G
00:01:32.837 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:32.837 + (( SPDK_TEST_FTL == 1 ))
00:01:32.837 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:32.837 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:32.837 + for nvme in "${!nvme_files[@]}"
00:01:32.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi2.img -s 4G
00:01:32.837 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:32.837 + for nvme in "${!nvme_files[@]}"
00:01:32.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-cmb.img -s 5G
00:01:32.837 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:32.837 + for nvme in "${!nvme_files[@]}"
00:01:32.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-openstack.img -s 8G
00:01:32.837 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:32.837 + for nvme in "${!nvme_files[@]}"
00:01:32.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-zns.img -s 5G
00:01:32.837 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:32.837 + for nvme in "${!nvme_files[@]}"
00:01:32.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi1.img -s 4G
00:01:32.837 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:32.837 + for nvme in "${!nvme_files[@]}"
00:01:32.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme-multi0.img -s 4G
00:01:32.837 Formatting '/var/lib/libvirt/images/backends/ex1-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:32.837 + for nvme in "${!nvme_files[@]}"
00:01:32.837 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex1-nvme.img -s 5G
00:01:33.096 Formatting '/var/lib/libvirt/images/backends/ex1-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:33.096 ++ sudo grep -rl ex1-nvme.img /etc/libvirt/qemu
00:01:33.096 + echo 'End stage prepare_nvme.sh'
00:01:33.096 End stage prepare_nvme.sh
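Each backing file above is produced by create_nvme_img.sh; the "Formatting ... fmt=raw ... preallocation=falloc" lines match qemu-img's output, so a single image can likely be recreated by hand with something like this (a sketch, assuming the script wraps qemu-img create):

    # 5 GiB raw image, space preallocated via fallocate(2), matching
    # "fmt=raw size=5368709120 preallocation=falloc"
    qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex1-nvme.img 5G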
00:01:33.108 [Pipeline] sh
00:01:33.388 + DISTRO=freebsd14 CPUS=10 RAM=14336 jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:33.388 Setup: -n 10 -s 14336 -x http://proxy-dmz.intel.com:911 -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex1-nvme.img -H -a -v -f freebsd14
00:01:33.388
00:01:33.388 DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant
00:01:33.388 SPDK_DIR=/var/jenkins/workspace/freebsd-vg-autotest/spdk
00:01:33.388 VAGRANT_TARGET=/var/jenkins/workspace/freebsd-vg-autotest
00:01:33.388 HELP=0
00:01:33.388 DRY_RUN=0
00:01:33.388 NVME_FILE=/var/lib/libvirt/images/backends/ex1-nvme.img,
00:01:33.388 NVME_DISKS_TYPE=nvme,
00:01:33.388 NVME_AUTO_CREATE=0
00:01:33.388 NVME_DISKS_NAMESPACES=,
00:01:33.388 NVME_CMB=,
00:01:33.388 NVME_PMR=,
00:01:33.388 NVME_ZNS=,
00:01:33.388 NVME_MS=,
00:01:33.388 NVME_FDP=,
00:01:33.388 SPDK_VAGRANT_DISTRO=freebsd14
00:01:33.388 SPDK_VAGRANT_VMCPU=10
00:01:33.388 SPDK_VAGRANT_VMRAM=14336
00:01:33.388 SPDK_VAGRANT_PROVIDER=libvirt
00:01:33.388 SPDK_VAGRANT_HTTP_PROXY=http://proxy-dmz.intel.com:911
00:01:33.388 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:33.388 SPDK_OPENSTACK_NETWORK=0
00:01:33.388 VAGRANT_PACKAGE_BOX=0
00:01:33.389 VAGRANTFILE=/var/jenkins/workspace/freebsd-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:33.389 FORCE_DISTRO=true
00:01:33.389 VAGRANT_BOX_VERSION=
00:01:33.389 EXTRA_VAGRANTFILES=
00:01:33.389 NIC_MODEL=e1000
00:01:33.389
00:01:33.389 mkdir: created directory '/var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt'
00:01:33.389 /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt /var/jenkins/workspace/freebsd-vg-autotest
00:01:36.676 Bringing machine 'default' up with 'libvirt' provider...
00:01:36.935 ==> default: Creating image (snapshot of base box volume).
00:01:37.194 ==> default: Creating domain with the following settings...
00:01:37.194 ==> default: -- Name: freebsd14-14.0-RELEASE-1718332871-2294_default_1720990848_18c37b4f6a5c67d7d464
00:01:37.194 ==> default: -- Domain type: kvm
00:01:37.194 ==> default: -- Cpus: 10
00:01:37.194 ==> default: -- Feature: acpi
00:01:37.194 ==> default: -- Feature: apic
00:01:37.194 ==> default: -- Feature: pae
00:01:37.194 ==> default: -- Memory: 14336M
00:01:37.194 ==> default: -- Memory Backing: hugepages:
00:01:37.194 ==> default: -- Management MAC:
00:01:37.194 ==> default: -- Loader:
00:01:37.194 ==> default: -- Nvram:
00:01:37.194 ==> default: -- Base box: spdk/freebsd14
00:01:37.194 ==> default: -- Storage pool: default
00:01:37.194 ==> default: -- Image: /var/lib/libvirt/images/freebsd14-14.0-RELEASE-1718332871-2294_default_1720990848_18c37b4f6a5c67d7d464.img (32G)
00:01:37.194 ==> default: -- Volume Cache: default
00:01:37.194 ==> default: -- Kernel:
00:01:37.194 ==> default: -- Initrd:
00:01:37.194 ==> default: -- Graphics Type: vnc
00:01:37.194 ==> default: -- Graphics Port: -1
00:01:37.194 ==> default: -- Graphics IP: 127.0.0.1
00:01:37.194 ==> default: -- Graphics Password: Not defined
00:01:37.194 ==> default: -- Video Type: cirrus
00:01:37.194 ==> default: -- Video VRAM: 9216
00:01:37.194 ==> default: -- Sound Type:
00:01:37.194 ==> default: -- Keymap: en-us
00:01:37.194 ==> default: -- TPM Path:
00:01:37.194 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:37.194 ==> default: -- Command line args:
00:01:37.194 ==> default: -> value=-device,
00:01:37.194 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:37.194 ==> default: -> value=-drive,
00:01:37.194 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0,
00:01:37.194 ==> default: -> value=-device,
00:01:37.194 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
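The command-line args above are how vagrant-libvirt hands the backing file to QEMU: an emulated nvme controller, a raw host drive left unattached (if=none), and an nvme-ns namespace bound to that drive. Stripped of the libvirt wrapping, the NVMe part of the invocation reduces to roughly the following (a sketch; machine type, memory, and network options are elided):

    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
        -device nvme,id=nvme-0,serial=12340,addr=0x10 \
        -drive format=raw,file=/var/lib/libvirt/images/backends/ex1-nvme.img,if=none,id=nvme-0-drive0 \
        -device nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096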
00:01:37.194 ==> default: Creating shared folders metadata...
==> default: Starting domain.
00:01:39.099 ==> default: Waiting for domain to get an IP address...
00:02:01.025 ==> default: Waiting for SSH to become available...
00:02:11.064 ==> default: Configuring and enabling network interfaces...
00:02:21.036 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:02:33.233 ==> default: Mounting SSHFS shared folder...
00:02:35.132 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt/output => /home/vagrant/spdk_repo/output
00:02:35.132 ==> default: Checking Mount..
00:02:36.506 ==> default: Folder Successfully Mounted!
00:02:36.507 ==> default: Running provisioner: file...
00:02:37.440 default: ~/.gitconfig => .gitconfig
00:02:38.005
00:02:38.005 SUCCESS!
00:02:38.005
00:02:38.005 cd to /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt and type "vagrant ssh" to use.
00:02:38.005 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:02:38.005 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt" to destroy all trace of vm.
00:02:38.005
00:02:38.015 [Pipeline] }
00:02:38.034 [Pipeline] // stage
00:02:38.044 [Pipeline] dir
00:02:38.044 Running in /var/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt
00:02:38.046 [Pipeline] {
00:02:38.061 [Pipeline] catchError
00:02:38.063 [Pipeline] {
00:02:38.078 [Pipeline] sh
00:02:38.355 + vagrant ssh-config --host vagrant
00:02:38.355 + sed -ne /^Host/,$p
00:02:38.355 + tee ssh_conf
00:02:41.672 Host vagrant
00:02:41.672 HostName 192.168.121.61
00:02:41.672 User vagrant
00:02:41.672 Port 22
00:02:41.672 UserKnownHostsFile /dev/null
00:02:41.672 StrictHostKeyChecking no
00:02:41.672 PasswordAuthentication no
00:02:41.672 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-freebsd14/14.0-RELEASE-1718332871-2294/libvirt/freebsd14
00:02:41.672 IdentitiesOnly yes
00:02:41.672 LogLevel FATAL
00:02:41.672 ForwardAgent yes
00:02:41.672 ForwardX11 yes
00:02:41.672
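The Host block captured into ssh_conf is what lets every later pipeline step reach the VM as plain "vagrant" without going through vagrant's own CLI. The rest of the job follows the same pattern, here with hypothetical placeholders for the command and file:

    ssh -t -F ssh_conf vagrant@vagrant '<command>'    # run a command inside the VM
    scp -F ssh_conf -r <file> vagrant@vagrant:./      # copy files into the VM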
00:02:41.687 [Pipeline] withEnv
00:02:41.690 [Pipeline] {
00:02:41.706 [Pipeline] sh
00:02:41.985 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant #!/bin/bash
00:02:41.985 source /etc/os-release
00:02:41.985 [[ -e /image.version ]] && img=$(< /image.version)
00:02:41.985 # Minimal, systemd-like check.
00:02:41.985 if [[ -e /.dockerenv ]]; then
00:02:41.985 # Clear garbage from the node's name:
00:02:41.985 # agt-er_autotest_547-896 -> autotest_547-896
00:02:41.985 # $HOSTNAME is the actual container id
00:02:41.985 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:02:41.985 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:02:41.985 # We can assume this is a mount from a host where container is running,
00:02:41.985 # so fetch its hostname to easily identify the target swarm worker.
00:02:41.985 container="$(< /etc/hostname) ($agent)"
00:02:41.985 else
00:02:41.985 # Fallback
00:02:41.985 container=$agent
00:02:41.985 fi
00:02:41.985 fi
00:02:41.985 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:02:41.985
00:02:41.994 [Pipeline] }
00:02:42.013 [Pipeline] // withEnv
00:02:42.022 [Pipeline] setCustomBuildProperty
00:02:42.037 [Pipeline] stage
00:02:42.039 [Pipeline] { (Tests)
00:02:42.056 [Pipeline] sh
00:02:42.332 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:02:42.345 [Pipeline] sh
00:02:42.621 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:02:42.635 [Pipeline] timeout
00:02:42.635 Timeout set to expire in 1 hr 30 min
00:02:42.637 [Pipeline] {
00:02:42.652 [Pipeline] sh
00:02:42.929 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant git -C spdk_repo/spdk reset --hard
00:02:43.495 HEAD is now at 719d03c6a sock/uring: only register net impl if supported
00:02:43.508 [Pipeline] sh
00:02:43.786 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant sudo chown vagrant:vagrant spdk_repo
00:02:43.801 [Pipeline] sh
00:02:44.078 + scp -F ssh_conf -r /var/jenkins/workspace/freebsd-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:02:44.095 [Pipeline] sh
00:02:44.370 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant CXX=/usr/bin/clang++ CC=/usr/bin/clang JOB_BASE_NAME=freebsd-vg-autotest ./autoruner.sh spdk_repo
00:02:44.370 ++ readlink -f spdk_repo
00:02:44.370 + DIR_ROOT=/home/vagrant/spdk_repo
00:02:44.370 + [[ -n /home/vagrant/spdk_repo ]]
00:02:44.370 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:02:44.370 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:02:44.371 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:02:44.371 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:02:44.371 + [[ -d /home/vagrant/spdk_repo/output ]]
00:02:44.371 + [[ freebsd-vg-autotest == pkgdep-* ]]
00:02:44.371 + cd /home/vagrant/spdk_repo
00:02:44.371 + source /etc/os-release
00:02:44.371 ++ NAME=FreeBSD
00:02:44.371 ++ VERSION=14.0-RELEASE
00:02:44.371 ++ VERSION_ID=14.0
00:02:44.371 ++ ID=freebsd
00:02:44.371 ++ ANSI_COLOR='0;31'
00:02:44.371 ++ PRETTY_NAME='FreeBSD 14.0-RELEASE'
00:02:44.371 ++ CPE_NAME=cpe:/o:freebsd:freebsd:14.0
00:02:44.371 ++ HOME_URL=https://FreeBSD.org/
00:02:44.371 ++ BUG_REPORT_URL=https://bugs.FreeBSD.org/
00:02:44.371 + uname -a
00:02:44.371 FreeBSD freebsd-cloud-1718332871-2294.local 14.0-RELEASE FreeBSD 14.0-RELEASE #0 releng/14.0-n265380-f9716eee8ab4: Fri Nov 10 05:57:23 UTC 2023 root@releng1.nyi.freebsd.org:/usr/obj/usr/src/amd64.amd64/sys/GENERIC amd64
00:02:44.371 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:02:44.629 Contigmem (not present)
00:02:44.629 Buffer Size: not set
00:02:44.629 Num Buffers: not set
00:02:44.629
00:02:44.629
00:02:44.629 Type BDF Vendor Device Driver
00:02:44.629 NVMe 0:0:16:0 0x1b36 0x0010 nvme0
00:02:44.629 + rm -f /tmp/spdk-ld-path
00:02:44.629 + source autorun-spdk.conf
00:02:44.629 ++ SPDK_TEST_UNITTEST=1
00:02:44.629 ++ SPDK_RUN_VALGRIND=0
00:02:44.629 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:02:44.629 ++ SPDK_TEST_NVME=1
00:02:44.629 ++ SPDK_TEST_BLOCKDEV=1
00:02:44.629 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:44.629 ++ RUN_NIGHTLY=1
00:02:44.629 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:02:44.629 + [[ -n '' ]]
00:02:44.629 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:02:44.629 + for M in /var/spdk/build-*-manifest.txt
00:02:44.629 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:02:44.629 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:02:44.629 + for M in /var/spdk/build-*-manifest.txt
00:02:44.629 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:02:44.629 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:02:44.629 ++ uname
00:02:44.629 + [[ FreeBSD == \L\i\n\u\x ]]
00:02:44.629 + dmesg_pid=1298
00:02:44.629 + tail -F /var/log/messages
00:02:44.629 + [[ FreeBSD == FreeBSD ]]
00:02:44.629 + export LC_ALL=C LC_CTYPE=C
00:02:44.629 + LC_ALL=C
00:02:44.629 + LC_CTYPE=C
00:02:44.629 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:44.629 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:02:44.629 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:02:44.629 + [[ -x /usr/src/fio-static/fio ]]
00:02:44.629 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
+ [[ ! -v VFIO_QEMU_BIN ]]
00:02:44.629 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:02:44.629 + vfios=(/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64)
00:02:44.629 + export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:44.629 + VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64'
00:02:44.629 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:02:44.629 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:02:44.629 Test configuration:
00:02:44.629 SPDK_TEST_UNITTEST=1
00:02:44.629 SPDK_RUN_VALGRIND=0
00:02:44.629 SPDK_RUN_FUNCTIONAL_TEST=1
00:02:44.629 SPDK_TEST_NVME=1
00:02:44.629 SPDK_TEST_BLOCKDEV=1
00:02:44.629 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:02:44.629 RUN_NIGHTLY=1
21:01:56 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:02:44.629 21:01:56 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:02:44.629 21:01:56 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:02:44.629 21:01:56 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:02:44.629 21:01:56 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:02:44.629 21:01:56 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:02:44.629 21:01:56 -- paths/export.sh@4 -- $ export PATH
00:02:44.629 21:01:56 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin
00:02:44.629 21:01:56 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:02:44.629 21:01:56 -- common/autobuild_common.sh@444 -- $ date +%s
00:02:44.629 21:01:56 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720990916.XXXXXX
00:02:44.629 21:01:56 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720990916.XXXXXX.oJ7au9M5d5
00:02:44.629 21:01:56 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]]
00:02:44.629 21:01:56 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']'
00:02:44.629 21:01:56 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:02:44.629 21:01:56 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:02:44.629 21:01:56 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:02:44.629 21:01:56 -- common/autobuild_common.sh@460 -- $ get_config_params
00:02:44.629 21:01:56 -- common/autotest_common.sh@396 -- $ xtrace_disable
00:02:44.629 21:01:56 -- common/autotest_common.sh@10 -- $ set +x
00:02:44.887 21:01:56 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio'
00:02:44.887 21:01:56 -- common/autobuild_common.sh@462 -- $ start_monitor_resources
00:02:44.887 21:01:56 -- pm/common@17 -- $ local monitor
00:02:44.887 21:01:56 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:02:44.887 21:01:56 -- pm/common@25 -- $ sleep 1
00:02:44.887 21:01:56 -- pm/common@21 -- $ date +%s
00:02:44.887 21:01:56 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1720990916
00:02:44.887 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1720990916_collect-vmstat.pm.log
00:02:45.819 21:01:57 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT
00:02:45.819 21:01:57 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:02:45.819 21:01:57 -- spdk/autobuild.sh@12 -- $ umask 022
00:02:45.819 21:01:57 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:02:45.819 21:01:57 -- spdk/autobuild.sh@16 -- $ date -u
00:02:45.819 Sun Jul 14 21:01:57 UTC 2024
00:02:45.819 21:01:57 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:02:45.819 v24.09-pre-202-g719d03c6a
00:02:45.819 21:01:57 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:02:45.819 21:01:57 -- spdk/autobuild.sh@23 -- $ '[' 0 -eq 1 ']'
00:02:45.819 21:01:57 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:02:45.819 21:01:57 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:02:45.819 21:01:57 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:02:45.819 21:01:57 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:02:45.819 21:01:57 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:02:45.819 21:01:57 -- spdk/autobuild.sh@57 -- $ [[ 1 -eq 1 ]]
00:02:45.819 21:01:57 -- spdk/autobuild.sh@58 -- $ unittest_build
00:02:45.819 21:01:57 -- common/autobuild_common.sh@420 -- $ run_test unittest_build _unittest_build
00:02:45.819 21:01:57 -- common/autotest_common.sh@1099 -- $ '[' 2 -le 1 ']'
00:02:45.819 21:01:57 -- common/autotest_common.sh@1105 -- $ xtrace_disable
00:02:45.819 21:01:57 -- common/autotest_common.sh@10 -- $ set +x
00:02:45.819 ************************************
00:02:45.819 START TEST unittest_build
00:02:45.819 ************************************
21:01:57 unittest_build -- common/autotest_common.sh@1123 -- $ _unittest_build
21:01:57 unittest_build -- common/autobuild_common.sh@411 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --without-shared
00:02:46.758 Notice: Vhost, rte_vhost library, virtio, and fuse
00:02:46.758 are only supported on Linux. Turning off default feature.
00:02:47.016 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:02:47.016 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:02:47.951 RDMA_OPTION_ID_ACK_TIMEOUT is not supported
00:02:47.951 Using 'verbs' RDMA provider
00:03:00.409 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:03:08.541 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:03:08.541 Creating mk/config.mk...done.
00:03:08.541 Creating mk/cc.flags.mk...done.
00:03:08.541 Type 'gmake' to build.
00:03:08.541 21:02:20 unittest_build -- common/autobuild_common.sh@412 -- $ gmake -j10
00:03:08.799 gmake[1]: Nothing to be done for 'all'.
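The whole unittest_build test reduces to the configure and gmake invocations traced above; reproducing it by hand inside the VM would look roughly like this (a sketch; paths and flags exactly as recorded in the log):

    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --without-shared
    # SPDK's build needs GNU make; on FreeBSD that is gmake, not the base-system make
    gmake -j10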
00:03:12.978 ps: stdin: not a terminal
00:03:17.159 The Meson build system
00:03:17.159 Version: 1.4.0
00:03:17.159 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:03:17.159 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:03:17.159 Build type: native build
00:03:17.159 Program cat found: YES (/bin/cat)
00:03:17.159 Project name: DPDK
00:03:17.159 Project version: 24.03.0
00:03:17.159 C compiler for the host machine: /usr/bin/clang (clang 16.0.6 "FreeBSD clang version 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1a259152)")
00:03:17.159 C linker for the host machine: /usr/bin/clang ld.lld 16.0.6
00:03:17.159 Host machine cpu family: x86_64
00:03:17.159 Host machine cpu: x86_64
00:03:17.159 Message: ## Building in Developer Mode ##
00:03:17.159 Program pkg-config found: YES (/usr/local/bin/pkg-config)
00:03:17.159 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:03:17.159 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:03:17.159 Program python3 found: YES (/usr/local/bin/python3.9)
00:03:17.159 Program cat found: YES (/bin/cat)
00:03:17.159 Compiler for C supports arguments -march=native: YES
00:03:17.159 Checking for size of "void *" : 8
00:03:17.159 Checking for size of "void *" : 8 (cached)
00:03:17.159 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:03:17.159 Library m found: YES
00:03:17.159 Library numa found: NO
00:03:17.159 Library fdt found: NO
00:03:17.159 Library execinfo found: YES
00:03:17.159 Has header "execinfo.h" : YES
00:03:17.159 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.2.0
00:03:17.159 Run-time dependency libarchive found: NO (tried pkgconfig)
00:03:17.159 Run-time dependency libbsd found: NO (tried pkgconfig)
00:03:17.159 Run-time dependency jansson found: NO (tried pkgconfig)
00:03:17.159 Run-time dependency openssl found: YES 3.0.13
00:03:17.159 Run-time dependency libpcap found: NO (tried pkgconfig)
00:03:17.159 Library pcap found: YES
00:03:17.159 Has header "pcap.h" with dependency -lpcap: YES
00:03:17.159 Compiler for C supports arguments -Wcast-qual: YES
00:03:17.159 Compiler for C supports arguments -Wdeprecated: YES
00:03:17.159 Compiler for C supports arguments -Wformat: YES
00:03:17.159 Compiler for C supports arguments -Wformat-nonliteral: YES
00:03:17.159 Compiler for C supports arguments -Wformat-security: YES
00:03:17.159 Compiler for C supports arguments -Wmissing-declarations: YES
00:03:17.159 Compiler for C supports arguments -Wmissing-prototypes: YES
00:03:17.159 Compiler for C supports arguments -Wnested-externs: YES
00:03:17.159 Compiler for C supports arguments -Wold-style-definition: YES
00:03:17.159 Compiler for C supports arguments -Wpointer-arith: YES
00:03:17.159 Compiler for C supports arguments -Wsign-compare: YES
00:03:17.159 Compiler for C supports arguments -Wstrict-prototypes: YES
00:03:17.159 Compiler for C supports arguments -Wundef: YES
00:03:17.159 Compiler for C supports arguments -Wwrite-strings: YES
00:03:17.159 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:03:17.159 Compiler for C supports arguments -Wno-packed-not-aligned: NO
00:03:17.159 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:03:17.159 Compiler for C supports arguments -mavx512f: YES
00:03:17.159 Checking if "AVX512 checking" compiles: YES
00:03:17.159 Fetching value of define "__SSE4_2__" : 1
define "__AES__" : 1 00:03:17.159 Fetching value of define "__AVX__" : 1 00:03:17.159 Fetching value of define "__AVX2__" : 1 00:03:17.159 Fetching value of define "__AVX512BW__" : (undefined) 00:03:17.159 Fetching value of define "__AVX512CD__" : (undefined) 00:03:17.159 Fetching value of define "__AVX512DQ__" : (undefined) 00:03:17.159 Fetching value of define "__AVX512F__" : (undefined) 00:03:17.159 Fetching value of define "__AVX512VL__" : (undefined) 00:03:17.159 Fetching value of define "__PCLMUL__" : 1 00:03:17.159 Fetching value of define "__RDRND__" : 1 00:03:17.159 Fetching value of define "__RDSEED__" : 1 00:03:17.159 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:03:17.159 Fetching value of define "__znver1__" : (undefined) 00:03:17.159 Fetching value of define "__znver2__" : (undefined) 00:03:17.159 Fetching value of define "__znver3__" : (undefined) 00:03:17.159 Fetching value of define "__znver4__" : (undefined) 00:03:17.159 Compiler for C supports arguments -Wno-format-truncation: NO 00:03:17.159 Message: lib/log: Defining dependency "log" 00:03:17.159 Message: lib/kvargs: Defining dependency "kvargs" 00:03:17.159 Message: lib/telemetry: Defining dependency "telemetry" 00:03:17.159 Checking if "Detect argument count for CPU_OR" compiles: YES 00:03:17.159 Checking for function "getentropy" : YES 00:03:17.159 Message: lib/eal: Defining dependency "eal" 00:03:17.159 Message: lib/ring: Defining dependency "ring" 00:03:17.159 Message: lib/rcu: Defining dependency "rcu" 00:03:17.159 Message: lib/mempool: Defining dependency "mempool" 00:03:17.159 Message: lib/mbuf: Defining dependency "mbuf" 00:03:17.159 Fetching value of define "__PCLMUL__" : 1 (cached) 00:03:17.159 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:03:17.159 Compiler for C supports arguments -mpclmul: YES 00:03:17.159 Compiler for C supports arguments -maes: YES 00:03:17.159 Compiler for C supports arguments -mavx512f: YES (cached) 00:03:17.159 Compiler for C supports arguments -mavx512bw: YES 00:03:17.159 Compiler for C supports arguments -mavx512dq: YES 00:03:17.159 Compiler for C supports arguments -mavx512vl: YES 00:03:17.159 Compiler for C supports arguments -mvpclmulqdq: YES 00:03:17.159 Compiler for C supports arguments -mavx2: YES 00:03:17.159 Compiler for C supports arguments -mavx: YES 00:03:17.159 Message: lib/net: Defining dependency "net" 00:03:17.159 Message: lib/meter: Defining dependency "meter" 00:03:17.159 Message: lib/ethdev: Defining dependency "ethdev" 00:03:17.159 Message: lib/pci: Defining dependency "pci" 00:03:17.159 Message: lib/cmdline: Defining dependency "cmdline" 00:03:17.159 Message: lib/hash: Defining dependency "hash" 00:03:17.159 Message: lib/timer: Defining dependency "timer" 00:03:17.159 Message: lib/compressdev: Defining dependency "compressdev" 00:03:17.159 Message: lib/cryptodev: Defining dependency "cryptodev" 00:03:17.159 Message: lib/dmadev: Defining dependency "dmadev" 00:03:17.159 Compiler for C supports arguments -Wno-cast-qual: YES 00:03:17.159 Message: lib/reorder: Defining dependency "reorder" 00:03:17.159 Message: lib/security: Defining dependency "security" 00:03:17.159 Has header "linux/userfaultfd.h" : NO 00:03:17.159 Has header "linux/vduse.h" : NO 00:03:17.159 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:03:17.159 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:03:17.159 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:03:17.159 Message: drivers/mempool/ring: Defining dependency 
"mempool_ring" 00:03:17.159 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:03:17.159 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:03:17.159 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:03:17.160 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:03:17.160 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:03:17.160 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:03:17.160 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:03:17.160 Program doxygen found: YES (/usr/local/bin/doxygen) 00:03:17.160 Configuring doxy-api-html.conf using configuration 00:03:17.160 Configuring doxy-api-man.conf using configuration 00:03:17.160 Program mandb found: NO 00:03:17.160 Program sphinx-build found: NO 00:03:17.160 Configuring rte_build_config.h using configuration 00:03:17.160 Message: 00:03:17.160 ================= 00:03:17.160 Applications Enabled 00:03:17.160 ================= 00:03:17.160 00:03:17.160 apps: 00:03:17.160 00:03:17.160 00:03:17.160 Message: 00:03:17.160 ================= 00:03:17.160 Libraries Enabled 00:03:17.160 ================= 00:03:17.160 00:03:17.160 libs: 00:03:17.160 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:03:17.160 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:03:17.160 cryptodev, dmadev, reorder, security, 00:03:17.160 00:03:17.160 Message: 00:03:17.160 =============== 00:03:17.160 Drivers Enabled 00:03:17.160 =============== 00:03:17.160 00:03:17.160 common: 00:03:17.160 00:03:17.160 bus: 00:03:17.160 pci, vdev, 00:03:17.160 mempool: 00:03:17.160 ring, 00:03:17.160 dma: 00:03:17.160 00:03:17.160 net: 00:03:17.160 00:03:17.160 crypto: 00:03:17.160 00:03:17.160 compress: 00:03:17.160 00:03:17.160 00:03:17.160 Message: 00:03:17.160 ================= 00:03:17.160 Content Skipped 00:03:17.160 ================= 00:03:17.160 00:03:17.160 apps: 00:03:17.160 dumpcap: explicitly disabled via build config 00:03:17.160 graph: explicitly disabled via build config 00:03:17.160 pdump: explicitly disabled via build config 00:03:17.160 proc-info: explicitly disabled via build config 00:03:17.160 test-acl: explicitly disabled via build config 00:03:17.160 test-bbdev: explicitly disabled via build config 00:03:17.160 test-cmdline: explicitly disabled via build config 00:03:17.160 test-compress-perf: explicitly disabled via build config 00:03:17.160 test-crypto-perf: explicitly disabled via build config 00:03:17.160 test-dma-perf: explicitly disabled via build config 00:03:17.160 test-eventdev: explicitly disabled via build config 00:03:17.160 test-fib: explicitly disabled via build config 00:03:17.160 test-flow-perf: explicitly disabled via build config 00:03:17.160 test-gpudev: explicitly disabled via build config 00:03:17.160 test-mldev: explicitly disabled via build config 00:03:17.160 test-pipeline: explicitly disabled via build config 00:03:17.160 test-pmd: explicitly disabled via build config 00:03:17.160 test-regex: explicitly disabled via build config 00:03:17.160 test-sad: explicitly disabled via build config 00:03:17.160 test-security-perf: explicitly disabled via build config 00:03:17.160 00:03:17.160 libs: 00:03:17.160 argparse: explicitly disabled via build config 00:03:17.160 metrics: explicitly disabled via build config 00:03:17.160 acl: explicitly disabled via build config 00:03:17.160 bbdev: explicitly disabled via build config 00:03:17.160 bitratestats: 
00:03:17.160 bitratestats: explicitly disabled via build config
00:03:17.160 bpf: explicitly disabled via build config
00:03:17.160 cfgfile: explicitly disabled via build config
00:03:17.160 distributor: explicitly disabled via build config
00:03:17.160 efd: explicitly disabled via build config
00:03:17.160 eventdev: explicitly disabled via build config
00:03:17.160 dispatcher: explicitly disabled via build config
00:03:17.160 gpudev: explicitly disabled via build config
00:03:17.160 gro: explicitly disabled via build config
00:03:17.160 gso: explicitly disabled via build config
00:03:17.160 ip_frag: explicitly disabled via build config
00:03:17.160 jobstats: explicitly disabled via build config
00:03:17.160 latencystats: explicitly disabled via build config
00:03:17.160 lpm: explicitly disabled via build config
00:03:17.160 member: explicitly disabled via build config
00:03:17.160 pcapng: explicitly disabled via build config
00:03:17.160 power: only supported on Linux
00:03:17.160 rawdev: explicitly disabled via build config
00:03:17.160 regexdev: explicitly disabled via build config
00:03:17.160 mldev: explicitly disabled via build config
00:03:17.160 rib: explicitly disabled via build config
00:03:17.160 sched: explicitly disabled via build config
00:03:17.160 stack: explicitly disabled via build config
00:03:17.160 vhost: only supported on Linux
00:03:17.160 ipsec: explicitly disabled via build config
00:03:17.160 pdcp: explicitly disabled via build config
00:03:17.160 fib: explicitly disabled via build config
00:03:17.160 port: explicitly disabled via build config
00:03:17.160 pdump: explicitly disabled via build config
00:03:17.160 table: explicitly disabled via build config
00:03:17.160 pipeline: explicitly disabled via build config
00:03:17.160 graph: explicitly disabled via build config
00:03:17.160 node: explicitly disabled via build config
00:03:17.160
00:03:17.160 drivers:
00:03:17.160 common/cpt: not in enabled drivers build config
00:03:17.160 common/dpaax: not in enabled drivers build config
00:03:17.160 common/iavf: not in enabled drivers build config
00:03:17.160 common/idpf: not in enabled drivers build config
00:03:17.160 common/ionic: not in enabled drivers build config
00:03:17.160 common/mvep: not in enabled drivers build config
00:03:17.160 common/octeontx: not in enabled drivers build config
00:03:17.160 bus/auxiliary: not in enabled drivers build config
00:03:17.160 bus/cdx: not in enabled drivers build config
00:03:17.160 bus/dpaa: not in enabled drivers build config
00:03:17.160 bus/fslmc: not in enabled drivers build config
00:03:17.160 bus/ifpga: not in enabled drivers build config
00:03:17.160 bus/platform: not in enabled drivers build config
00:03:17.160 bus/uacce: not in enabled drivers build config
00:03:17.160 bus/vmbus: not in enabled drivers build config
00:03:17.160 common/cnxk: not in enabled drivers build config
00:03:17.160 common/mlx5: not in enabled drivers build config
00:03:17.160 common/nfp: not in enabled drivers build config
00:03:17.160 common/nitrox: not in enabled drivers build config
00:03:17.160 common/qat: not in enabled drivers build config
00:03:17.160 common/sfc_efx: not in enabled drivers build config
00:03:17.160 mempool/bucket: not in enabled drivers build config
00:03:17.160 mempool/cnxk: not in enabled drivers build config
00:03:17.160 mempool/dpaa: not in enabled drivers build config
00:03:17.160 mempool/dpaa2: not in enabled drivers build config
00:03:17.160 mempool/octeontx: not in enabled drivers build config
00:03:17.160 mempool/stack: not in enabled drivers build config
00:03:17.160 dma/cnxk: not in enabled drivers build config
00:03:17.160 dma/dpaa: not in enabled drivers build config
00:03:17.160 dma/dpaa2: not in enabled drivers build config
00:03:17.160 dma/hisilicon: not in enabled drivers build config
00:03:17.160 dma/idxd: not in enabled drivers build config
00:03:17.160 dma/ioat: not in enabled drivers build config
00:03:17.160 dma/skeleton: not in enabled drivers build config
00:03:17.160 net/af_packet: not in enabled drivers build config
00:03:17.160 net/af_xdp: not in enabled drivers build config
00:03:17.160 net/ark: not in enabled drivers build config
00:03:17.160 net/atlantic: not in enabled drivers build config
00:03:17.160 net/avp: not in enabled drivers build config
00:03:17.160 net/axgbe: not in enabled drivers build config
00:03:17.160 net/bnx2x: not in enabled drivers build config
00:03:17.160 net/bnxt: not in enabled drivers build config
00:03:17.160 net/bonding: not in enabled drivers build config
00:03:17.160 net/cnxk: not in enabled drivers build config
00:03:17.160 net/cpfl: not in enabled drivers build config
00:03:17.160 net/cxgbe: not in enabled drivers build config
00:03:17.160 net/dpaa: not in enabled drivers build config
00:03:17.160 net/dpaa2: not in enabled drivers build config
00:03:17.160 net/e1000: not in enabled drivers build config
00:03:17.160 net/ena: not in enabled drivers build config
00:03:17.160 net/enetc: not in enabled drivers build config
00:03:17.160 net/enetfec: not in enabled drivers build config
00:03:17.160 net/enic: not in enabled drivers build config
00:03:17.160 net/failsafe: not in enabled drivers build config
00:03:17.160 net/fm10k: not in enabled drivers build config
00:03:17.160 net/gve: not in enabled drivers build config
00:03:17.160 net/hinic: not in enabled drivers build config
00:03:17.160 net/hns3: not in enabled drivers build config
00:03:17.160 net/i40e: not in enabled drivers build config
00:03:17.160 net/iavf: not in enabled drivers build config
00:03:17.160 net/ice: not in enabled drivers build config
00:03:17.160 net/idpf: not in enabled drivers build config
00:03:17.160 net/igc: not in enabled drivers build config
00:03:17.160 net/ionic: not in enabled drivers build config
00:03:17.160 net/ipn3ke: not in enabled drivers build config
00:03:17.160 net/ixgbe: not in enabled drivers build config
00:03:17.160 net/mana: not in enabled drivers build config
00:03:17.160 net/memif: not in enabled drivers build config
00:03:17.160 net/mlx4: not in enabled drivers build config
00:03:17.160 net/mlx5: not in enabled drivers build config
00:03:17.160 net/mvneta: not in enabled drivers build config
00:03:17.160 net/mvpp2: not in enabled drivers build config
00:03:17.160 net/netvsc: not in enabled drivers build config
00:03:17.160 net/nfb: not in enabled drivers build config
00:03:17.160 net/nfp: not in enabled drivers build config
00:03:17.160 net/ngbe: not in enabled drivers build config
00:03:17.160 net/null: not in enabled drivers build config
00:03:17.160 net/octeontx: not in enabled drivers build config
00:03:17.160 net/octeon_ep: not in enabled drivers build config
00:03:17.160 net/pcap: not in enabled drivers build config
00:03:17.160 net/pfe: not in enabled drivers build config
00:03:17.160 net/qede: not in enabled drivers build config
00:03:17.160 net/ring: not in enabled drivers build config
00:03:17.160 net/sfc: not in enabled drivers build config
00:03:17.160 net/softnic: not in enabled drivers build config
00:03:17.160 net/tap: not in enabled drivers build config
00:03:17.160 net/thunderx: not in enabled drivers build config
00:03:17.160 net/txgbe: not in enabled drivers build config
00:03:17.160 net/vdev_netvsc: not in enabled drivers build config
00:03:17.160 net/vhost: not in enabled drivers build config
00:03:17.160 net/virtio: not in enabled drivers build config
00:03:17.160 net/vmxnet3: not in enabled drivers build config
00:03:17.160 raw/*: missing internal dependency, "rawdev"
00:03:17.160 crypto/armv8: not in enabled drivers build config
00:03:17.160 crypto/bcmfs: not in enabled drivers build config
00:03:17.160 crypto/caam_jr: not in enabled drivers build config
00:03:17.160 crypto/ccp: not in enabled drivers build config
00:03:17.160 crypto/cnxk: not in enabled drivers build config
00:03:17.160 crypto/dpaa_sec: not in enabled drivers build config
00:03:17.160 crypto/dpaa2_sec: not in enabled drivers build config
00:03:17.161 crypto/ipsec_mb: not in enabled drivers build config
00:03:17.161 crypto/mlx5: not in enabled drivers build config
00:03:17.161 crypto/mvsam: not in enabled drivers build config
00:03:17.161 crypto/nitrox: not in enabled drivers build config
00:03:17.161 crypto/null: not in enabled drivers build config
00:03:17.161 crypto/octeontx: not in enabled drivers build config
00:03:17.161 crypto/openssl: not in enabled drivers build config
00:03:17.161 crypto/scheduler: not in enabled drivers build config
00:03:17.161 crypto/uadk: not in enabled drivers build config
00:03:17.161 crypto/virtio: not in enabled drivers build config
00:03:17.161 compress/isal: not in enabled drivers build config
00:03:17.161 compress/mlx5: not in enabled drivers build config
00:03:17.161 compress/nitrox: not in enabled drivers build config
00:03:17.161 compress/octeontx: not in enabled drivers build config
00:03:17.161 compress/zlib: not in enabled drivers build config
00:03:17.161 regex/*: missing internal dependency, "regexdev"
00:03:17.161 ml/*: missing internal dependency, "mldev"
00:03:17.161 vdpa/*: missing internal dependency, "vhost"
00:03:17.161 event/*: missing internal dependency, "eventdev"
00:03:17.161 baseband/*: missing internal dependency, "bbdev"
00:03:17.161 gpu/*: missing internal dependency, "gpudev"
00:03:17.161
00:03:17.161
00:03:17.161 Build targets in project: 81
00:03:17.161
00:03:17.161 DPDK 24.03.0
00:03:17.161
00:03:17.161 User defined options
00:03:17.161 buildtype : debug
00:03:17.161 default_library : static
00:03:17.161 libdir : lib
00:03:17.161 prefix : /
00:03:17.161 c_args : -fPIC -Werror
00:03:17.161 c_link_args :
00:03:17.161 cpu_instruction_set: native
00:03:17.161 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:03:17.161 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:03:17.161 enable_docs : false
00:03:17.161 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring
00:03:17.161 enable_kmods : true
00:03:17.161 max_lcores : 128
00:03:17.161 tests : false
00:03:17.161
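The summary above lists the meson options SPDK's configure passed down to the bundled DPDK. Requesting the same configuration directly would look roughly like this (a sketch; the long disable_apps/disable_libs lists shown in the summary are elided):

    meson setup build-tmp -Dbuildtype=debug -Ddefault_library=static \
        -Dc_args='-fPIC -Werror' -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
        -Denable_kmods=true -Dmax_lcores=128 -Dtests=false
    # plus the -Ddisable_apps=/-Ddisable_libs= lists exactly as printed above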
00:03:17.161 Found ninja-1.11.1 at /usr/local/bin/ninja
00:03:17.419 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:03:17.419 [1/233] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o
00:03:17.677 [2/233] Compiling C object lib/librte_log.a.p/log_log.c.o
00:03:17.677 [3/233] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:03:17.677 [4/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:03:17.677 [5/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:03:17.677 [6/233] Linking static target lib/librte_kvargs.a
00:03:17.677 [7/233] Linking static target lib/librte_log.a
00:03:17.677 [8/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:03:17.934 [9/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:03:17.934 [10/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:03:17.934 [11/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:03:17.934 [12/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:03:17.934 [13/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:03:17.934 [14/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:03:17.934 [15/233] Linking static target lib/librte_telemetry.a
00:03:17.934 [16/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:03:18.192 [17/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:03:18.192 [18/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:03:18.192 [19/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:03:18.192 [20/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:03:18.451 [21/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:03:18.451 [22/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:03:18.451 [23/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:03:18.451 [24/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:03:18.451 [25/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:03:18.451 [26/233] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:03:18.451 [27/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:03:18.710 [28/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:03:18.710 [29/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:03:18.710 [30/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:03:18.710 [31/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o
00:03:18.710 [32/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:03:18.710 [33/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o
00:03:18.710 [34/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:03:18.710 [35/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:03:18.969 [36/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:03:18.969 [37/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:03:18.969 [38/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:03:18.969 [39/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:03:18.969 [40/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:03:18.969 [41/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:03:18.969 [42/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:03:19.227 [43/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:03:19.228 [44/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:03:19.228 [45/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:03:19.228 [46/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:03:19.228 [47/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:03:19.486 [48/233] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:03:19.486 [49/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:03:19.486 [50/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:03:19.486 [51/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o
00:03:19.486 [52/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:03:19.486 [53/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:03:19.486 [54/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:03:19.744 [55/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:03:19.744 [56/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:03:19.744 [57/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:03:19.744 [58/233] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:03:20.001 [59/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o
00:03:20.001 [60/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o
00:03:20.001 [61/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o
00:03:20.001 [62/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:03:20.001 [63/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:03:20.001 [64/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:03:20.001 [65/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o
00:03:20.001 [66/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o
00:03:20.001 [67/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o
00:03:20.001 [68/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o
00:03:20.260 [69/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o
00:03:20.260 [70/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o
00:03:20.260 [71/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o
00:03:20.260 [72/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:03:20.517 [73/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:03:20.517 [74/233] Linking static target lib/librte_eal.a
00:03:20.517 [75/233] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:03:20.517 [76/233] Linking static target lib/librte_ring.a
00:03:20.517 [77/233] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:03:20.517 [78/233] Linking static target lib/librte_rcu.a
00:03:20.517 [79/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:03:20.517 [80/233] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.517 [81/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:03:20.517 [82/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:03:20.775 [83/233] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:03:20.775 [84/233] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:03:20.775 [85/233] Linking static target lib/librte_mempool.a
00:03:20.775 [86/233] Linking target lib/librte_log.so.24.1
00:03:20.775 [87/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:03:20.775 [88/233] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:03:20.775 [89/233] Linking target lib/librte_kvargs.so.24.1
00:03:20.775 [90/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:03:21.032 [91/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:03:21.032 [92/233] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.032 [93/233] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.032 [94/233] Linking target lib/librte_telemetry.so.24.1
00:03:21.032 [95/233] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:03:21.032 [96/233] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:03:21.032 [97/233] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:03:21.032 [98/233] Linking static target lib/net/libnet_crc_avx512_lib.a
00:03:21.032 [99/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:03:21.032 [100/233] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:03:21.289 [101/233] Linking static target lib/librte_mbuf.a
00:03:21.289 [102/233] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:03:21.289 [103/233] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:03:21.289 [104/233] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:03:21.289 [105/233] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:03:21.289 [106/233] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:03:21.289 [107/233] Linking static target lib/librte_net.a
00:03:21.289 [108/233] Linking static target lib/librte_meter.a
00:03:21.547 [109/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:03:21.547 [110/233] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.547 [111/233] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:03:21.547 [112/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:03:21.805 [113/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:03:21.805 [114/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:03:22.063 [115/233] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:03:22.063 [116/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:03:22.321 [117/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:03:22.321 [118/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:03:22.321 [119/233] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:03:22.321 [120/233] Linking static target lib/librte_pci.a
00:03:22.321 [121/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:03:22.321 [122/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:03:22.321 [124/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:03:22.321 [125/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:03:22.580 [126/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:03:22.580 [127/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:03:22.580 [128/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:03:22.580 [129/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:03:22.580 [130/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:03:22.580 [131/233] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.580 [132/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:03:22.580 [133/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:03:22.580 [134/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:03:22.580 [135/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:03:22.580 [136/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:03:22.580 [137/233] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:03:22.580 [138/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:03:22.855 [139/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:03:22.855 [140/233] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:03:22.855 [141/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:03:22.855 [142/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:03:22.855 [143/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:03:22.855 [144/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:03:22.855 [145/233] Linking static target lib/librte_cmdline.a 00:03:23.125 [146/233] Linking static target lib/librte_ethdev.a 00:03:23.125 [147/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:03:23.125 [148/233] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:03:23.125 [149/233] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:03:23.125 [150/233] Linking static target lib/librte_timer.a 00:03:23.125 [151/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:03:23.383 [152/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:03:23.383 [153/233] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:03:23.383 [154/233] Linking static target lib/librte_hash.a 00:03:23.383 [155/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:03:23.383 [156/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:03:23.383 [157/233] Linking static target lib/librte_compressdev.a 00:03:23.383 [158/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:03:23.642 [159/233] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:03:23.642 [160/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:03:23.901 [161/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:03:23.901 [162/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:03:23.901 [163/233] Linking 
static target lib/librte_dmadev.a 00:03:23.901 [164/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:03:23.901 [165/233] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.159 [166/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:03:24.159 [167/233] Linking static target lib/librte_cryptodev.a 00:03:24.159 [168/233] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.159 [169/233] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:03:24.160 [170/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:03:24.160 [171/233] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.160 [172/233] Linking static target lib/librte_reorder.a 00:03:24.160 [173/233] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:03:24.160 [174/233] Linking static target lib/librte_security.a 00:03:24.160 [175/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:03:24.160 [176/233] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.418 [177/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:03:24.418 [178/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:03:24.418 [179/233] Linking static target drivers/libtmp_rte_bus_pci.a 00:03:24.418 [180/233] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.418 [181/233] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.677 [182/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:03:24.677 [183/233] Linking static target drivers/libtmp_rte_bus_vdev.a 00:03:24.677 [184/233] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:03:24.677 [185/233] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:24.677 [186/233] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:03:24.677 [187/233] Linking static target drivers/librte_bus_pci.a 00:03:24.677 [188/233] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:03:24.677 [189/233] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:24.677 [190/233] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:03:24.677 [191/233] Linking static target drivers/librte_bus_vdev.a 00:03:24.936 [192/233] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:03:24.936 [193/233] Linking static target drivers/libtmp_rte_mempool_ring.a 00:03:24.936 [194/233] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.936 [195/233] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.936 [196/233] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:03:24.936 [197/233] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:03:24.936 [198/233] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:03:24.936 [199/233] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 
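[Editor's note] The entries that follow leave meson's userspace targets and drive the FreeBSD kmod makefiles for the two DPDK kernel modules, contigmem and nic_uio. As a minimal sketch of how the resulting .ko files can be loaded and checked from C — assuming the /boot/modules paths and the tunable values this same log applies later (hw.contigmem.num_buffers="8", hw.contigmem.buffer_size="268435456"); kenv(2), kldload(2), and sysctlbyname(3) are standard FreeBSD interfaces, everything else here is illustrative:

    /* load_kmods.c -- hedged sketch; needs root. Build: cc load_kmods.c */
    #include <sys/types.h>
    #include <sys/sysctl.h>
    #include <sys/linker.h>
    #include <kenv.h>
    #include <stdio.h>

    int main(void) {
        /* contigmem fetches its tunables from the kernel environment when it
         * attaches, so set them before kldload(2). Values mirror this log. */
        kenv(KENV_SET, "hw.contigmem.num_buffers", "8", sizeof("8"));
        kenv(KENV_SET, "hw.contigmem.buffer_size", "268435456",
             sizeof("268435456"));

        if (kldload("/boot/modules/contigmem.ko") == -1)   /* path assumed */
            perror("kldload contigmem");
        if (kldload("/boot/modules/nic_uio.ko") == -1)     /* path assumed */
            perror("kldload nic_uio");

        /* Confirm the module picked the tunable up (assumes an int node). */
        int n = 0;
        size_t len = sizeof(n);
        if (sysctlbyname("hw.contigmem.num_buffers", &n, &len, NULL, 0) == 0)
            printf("contigmem buffers: %d\n", n);
        return 0;
    }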
00:03:25.196 [200/233] Linking static target drivers/librte_mempool_ring.a 00:03:26.573 [201/233] Generating kernel/freebsd/contigmem with a custom command 00:03:26.573 machine -> /usr/src/sys/amd64/include 00:03:26.573 x86 -> /usr/src/sys/x86/include 00:03:26.573 i386 -> /usr/src/sys/i386/include 00:03:26.573 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:03:26.573 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:03:26.573 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:03:26.573 touch opt_global.h 00:03:26.573 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. -I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:03:26.573 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:03:26.573 :> export_syms 00:03:26.573 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:03:26.573 objcopy --strip-debug contigmem.ko 00:03:27.141 [202/233] Generating kernel/freebsd/nic_uio with a custom command 00:03:27.141 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:03:27.141 ld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:03:27.141 :> export_syms 00:03:27.141 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:03:27.141 objcopy --strip-debug nic_uio.ko 00:03:29.039 [203/233] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.621 [204/233] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:03:31.621 [205/233] Linking target lib/librte_eal.so.24.1 00:03:31.621 [206/233] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:03:31.621 [207/233] Linking target lib/librte_dmadev.so.24.1 00:03:31.621 [208/233] Linking target lib/librte_meter.so.24.1 00:03:31.621 [209/233] Linking target lib/librte_ring.so.24.1 00:03:31.621 [210/233] Linking target lib/librte_timer.so.24.1 00:03:31.621 [211/233] Linking target drivers/librte_bus_vdev.so.24.1 00:03:31.621 [212/233] Linking target lib/librte_pci.so.24.1 00:03:31.621 [213/233] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:03:31.621 [214/233] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:03:31.621 [215/233] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:03:31.621 [216/233] Linking target drivers/librte_bus_pci.so.24.1 00:03:31.621 [217/233] Linking target lib/librte_rcu.so.24.1 00:03:31.621 [218/233] Linking target lib/librte_mempool.so.24.1 00:03:31.621 [219/233] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:03:31.621 [220/233] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:03:31.621 [221/233] Linking target lib/librte_mbuf.so.24.1 00:03:31.621 [222/233] Linking target drivers/librte_mempool_ring.so.24.1 00:03:31.877 [223/233] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:03:31.877 [224/233] Linking target lib/librte_net.so.24.1 00:03:31.877 [225/233] Linking target lib/librte_reorder.so.24.1 00:03:31.877 [226/233] Linking target lib/librte_cryptodev.so.24.1 00:03:31.877 [227/233] Linking target lib/librte_compressdev.so.24.1 00:03:32.133 [228/233] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:03:32.133 [229/233] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:03:32.133 [230/233] Linking target lib/librte_hash.so.24.1 00:03:32.133 [231/233] 
Linking target lib/librte_security.so.24.1 00:03:32.133 [232/233] Linking target lib/librte_cmdline.so.24.1 00:03:32.133 [233/233] Linking target lib/librte_ethdev.so.24.1 00:03:32.133 INFO: autodetecting backend as ninja 00:03:32.133 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:03:33.067 CC lib/log/log_flags.o 00:03:33.067 CC lib/log/log.o 00:03:33.067 CC lib/log/log_deprecated.o 00:03:33.067 CC lib/ut/ut.o 00:03:33.067 CC lib/ut_mock/mock.o 00:03:33.067 LIB libspdk_ut_mock.a 00:03:33.067 LIB libspdk_log.a 00:03:33.067 LIB libspdk_ut.a 00:03:33.067 CC lib/util/base64.o 00:03:33.067 CC lib/util/bit_array.o 00:03:33.067 CC lib/dma/dma.o 00:03:33.067 CC lib/util/crc16.o 00:03:33.067 CXX lib/trace_parser/trace.o 00:03:33.067 CC lib/util/cpuset.o 00:03:33.067 CC lib/util/crc32.o 00:03:33.067 CC lib/util/crc32c.o 00:03:33.067 CC lib/util/crc32_ieee.o 00:03:33.067 CC lib/ioat/ioat.o 00:03:33.324 CC lib/util/crc64.o 00:03:33.324 CC lib/util/dif.o 00:03:33.324 CC lib/util/fd.o 00:03:33.324 CC lib/util/file.o 00:03:33.324 CC lib/util/hexlify.o 00:03:33.324 LIB libspdk_dma.a 00:03:33.324 CC lib/util/iov.o 00:03:33.324 CC lib/util/math.o 00:03:33.324 CC lib/util/pipe.o 00:03:33.324 LIB libspdk_ioat.a 00:03:33.324 CC lib/util/strerror_tls.o 00:03:33.324 CC lib/util/string.o 00:03:33.324 CC lib/util/uuid.o 00:03:33.324 CC lib/util/fd_group.o 00:03:33.324 CC lib/util/xor.o 00:03:33.324 CC lib/util/zipf.o 00:03:33.324 LIB libspdk_util.a 00:03:33.582 CC lib/rdma_utils/rdma_utils.o 00:03:33.582 CC lib/conf/conf.o 00:03:33.582 CC lib/json/json_parse.o 00:03:33.582 CC lib/json/json_util.o 00:03:33.582 CC lib/json/json_write.o 00:03:33.582 CC lib/rdma_provider/common.o 00:03:33.582 CC lib/vmd/vmd.o 00:03:33.582 CC lib/env_dpdk/env.o 00:03:33.582 CC lib/idxd/idxd.o 00:03:33.582 CC lib/vmd/led.o 00:03:33.582 CC lib/rdma_provider/rdma_provider_verbs.o 00:03:33.582 LIB libspdk_conf.a 00:03:33.582 CC lib/idxd/idxd_user.o 00:03:33.582 CC lib/env_dpdk/memory.o 00:03:33.582 LIB libspdk_json.a 00:03:33.582 CC lib/env_dpdk/pci.o 00:03:33.582 LIB libspdk_rdma_utils.a 00:03:33.582 CC lib/env_dpdk/init.o 00:03:33.840 CC lib/env_dpdk/threads.o 00:03:33.840 LIB libspdk_vmd.a 00:03:33.840 CC lib/env_dpdk/pci_ioat.o 00:03:33.840 CC lib/env_dpdk/pci_virtio.o 00:03:33.840 LIB libspdk_idxd.a 00:03:33.840 LIB libspdk_rdma_provider.a 00:03:33.840 CC lib/env_dpdk/pci_vmd.o 00:03:33.840 CC lib/env_dpdk/pci_idxd.o 00:03:33.840 CC lib/env_dpdk/pci_event.o 00:03:33.840 CC lib/env_dpdk/sigbus_handler.o 00:03:33.840 CC lib/env_dpdk/pci_dpdk.o 00:03:33.840 CC lib/env_dpdk/pci_dpdk_2207.o 00:03:33.840 CC lib/env_dpdk/pci_dpdk_2211.o 00:03:33.840 CC lib/jsonrpc/jsonrpc_server.o 00:03:33.840 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:03:33.840 CC lib/jsonrpc/jsonrpc_client.o 00:03:33.840 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:03:34.098 LIB libspdk_jsonrpc.a 00:03:34.098 CC lib/rpc/rpc.o 00:03:34.098 LIB libspdk_rpc.a 00:03:34.355 LIB libspdk_env_dpdk.a 00:03:34.355 CC lib/notify/notify_rpc.o 00:03:34.355 CC lib/notify/notify.o 00:03:34.355 CC lib/keyring/keyring.o 00:03:34.355 CC lib/keyring/keyring_rpc.o 00:03:34.355 CC lib/trace/trace.o 00:03:34.355 CC lib/trace/trace_flags.o 00:03:34.355 CC lib/trace/trace_rpc.o 00:03:34.355 LIB libspdk_notify.a 00:03:34.355 LIB libspdk_keyring.a 00:03:34.355 LIB libspdk_trace.a 00:03:34.612 CC lib/sock/sock.o 00:03:34.613 CC lib/sock/sock_rpc.o 00:03:34.613 CC lib/thread/thread.o 00:03:34.613 CC lib/thread/iobuf.o 00:03:34.613 LIB 
libspdk_sock.a 00:03:34.871 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:34.871 CC lib/nvme/nvme_ctrlr.o 00:03:34.871 CC lib/nvme/nvme_fabric.o 00:03:34.871 CC lib/nvme/nvme_ns_cmd.o 00:03:34.871 CC lib/nvme/nvme_pcie_common.o 00:03:34.871 CC lib/nvme/nvme_ns.o 00:03:34.871 CC lib/nvme/nvme_pcie.o 00:03:34.871 CC lib/nvme/nvme_qpair.o 00:03:34.871 LIB libspdk_thread.a 00:03:34.871 LIB libspdk_trace_parser.a 00:03:34.871 CC lib/nvme/nvme.o 00:03:34.871 CC lib/accel/accel.o 00:03:35.129 CC lib/accel/accel_rpc.o 00:03:35.129 CC lib/accel/accel_sw.o 00:03:35.386 LIB libspdk_accel.a 00:03:35.386 CC lib/nvme/nvme_quirks.o 00:03:35.386 CC lib/nvme/nvme_transport.o 00:03:35.386 CC lib/nvme/nvme_discovery.o 00:03:35.386 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:35.386 CC lib/blob/blobstore.o 00:03:35.386 CC lib/init/json_config.o 00:03:35.386 CC lib/blob/request.o 00:03:35.386 CC lib/bdev/bdev.o 00:03:35.386 CC lib/init/subsystem.o 00:03:35.386 CC lib/bdev/bdev_rpc.o 00:03:35.386 CC lib/blob/zeroes.o 00:03:35.644 CC lib/bdev/bdev_zone.o 00:03:35.644 CC lib/init/subsystem_rpc.o 00:03:35.644 CC lib/bdev/part.o 00:03:35.644 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:35.644 CC lib/init/rpc.o 00:03:35.644 CC lib/nvme/nvme_tcp.o 00:03:35.644 CC lib/nvme/nvme_opal.o 00:03:35.644 LIB libspdk_init.a 00:03:35.644 CC lib/blob/blob_bs_dev.o 00:03:35.644 CC lib/bdev/scsi_nvme.o 00:03:35.902 CC lib/nvme/nvme_io_msg.o 00:03:35.902 CC lib/nvme/nvme_poll_group.o 00:03:35.902 CC lib/nvme/nvme_zns.o 00:03:35.902 CC lib/nvme/nvme_stubs.o 00:03:35.902 CC lib/event/app.o 00:03:35.902 CC lib/event/reactor.o 00:03:35.902 LIB libspdk_blob.a 00:03:35.902 CC lib/nvme/nvme_auth.o 00:03:35.902 LIB libspdk_bdev.a 00:03:35.902 CC lib/event/log_rpc.o 00:03:35.902 CC lib/nvme/nvme_rdma.o 00:03:36.161 CC lib/blobfs/blobfs.o 00:03:36.161 CC lib/event/app_rpc.o 00:03:36.161 CC lib/event/scheduler_static.o 00:03:36.161 LIB libspdk_event.a 00:03:36.161 CC lib/blobfs/tree.o 00:03:36.161 CC lib/lvol/lvol.o 00:03:36.161 LIB libspdk_blobfs.a 00:03:36.419 CC lib/scsi/dev.o 00:03:36.419 CC lib/scsi/lun.o 00:03:36.419 CC lib/scsi/port.o 00:03:36.419 CC lib/scsi/scsi.o 00:03:36.419 CC lib/scsi/scsi_bdev.o 00:03:36.419 CC lib/scsi/scsi_pr.o 00:03:36.419 CC lib/scsi/scsi_rpc.o 00:03:36.419 CC lib/scsi/task.o 00:03:36.419 LIB libspdk_lvol.a 00:03:36.419 LIB libspdk_scsi.a 00:03:36.677 CC lib/iscsi/init_grp.o 00:03:36.678 CC lib/iscsi/conn.o 00:03:36.678 CC lib/iscsi/iscsi.o 00:03:36.678 CC lib/iscsi/md5.o 00:03:36.678 CC lib/iscsi/param.o 00:03:36.678 CC lib/iscsi/portal_grp.o 00:03:36.678 CC lib/iscsi/tgt_node.o 00:03:36.678 CC lib/iscsi/iscsi_subsystem.o 00:03:36.678 CC lib/iscsi/iscsi_rpc.o 00:03:36.678 CC lib/iscsi/task.o 00:03:36.678 LIB libspdk_nvme.a 00:03:36.936 CC lib/nvmf/ctrlr.o 00:03:36.936 CC lib/nvmf/ctrlr_discovery.o 00:03:36.936 CC lib/nvmf/ctrlr_bdev.o 00:03:36.936 CC lib/nvmf/subsystem.o 00:03:36.936 CC lib/nvmf/nvmf.o 00:03:36.936 CC lib/nvmf/nvmf_rpc.o 00:03:36.936 CC lib/nvmf/transport.o 00:03:36.936 CC lib/nvmf/tcp.o 00:03:36.936 CC lib/nvmf/stubs.o 00:03:36.936 LIB libspdk_iscsi.a 00:03:36.936 CC lib/nvmf/mdns_server.o 00:03:36.936 CC lib/nvmf/rdma.o 00:03:36.936 CC lib/nvmf/auth.o 00:03:37.503 LIB libspdk_nvmf.a 00:03:37.503 CC module/env_dpdk/env_dpdk_rpc.o 00:03:37.503 CC module/blob/bdev/blob_bdev.o 00:03:37.503 CC module/keyring/file/keyring.o 00:03:37.503 CC module/keyring/file/keyring_rpc.o 00:03:37.503 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:37.503 CC module/accel/error/accel_error.o 00:03:37.503 CC 
module/accel/iaa/accel_iaa.o 00:03:37.503 CC module/sock/posix/posix.o 00:03:37.503 CC module/accel/dsa/accel_dsa.o 00:03:37.503 CC module/accel/ioat/accel_ioat.o 00:03:37.503 LIB libspdk_env_dpdk_rpc.a 00:03:37.503 CC module/accel/iaa/accel_iaa_rpc.o 00:03:37.760 CC module/accel/dsa/accel_dsa_rpc.o 00:03:37.760 LIB libspdk_keyring_file.a 00:03:37.760 CC module/accel/error/accel_error_rpc.o 00:03:37.760 LIB libspdk_scheduler_dynamic.a 00:03:37.760 CC module/accel/ioat/accel_ioat_rpc.o 00:03:37.760 LIB libspdk_accel_iaa.a 00:03:37.760 LIB libspdk_blob_bdev.a 00:03:37.760 LIB libspdk_accel_dsa.a 00:03:37.760 LIB libspdk_accel_error.a 00:03:37.760 LIB libspdk_accel_ioat.a 00:03:37.760 CC module/bdev/gpt/gpt.o 00:03:37.760 CC module/bdev/error/vbdev_error.o 00:03:37.760 CC module/bdev/delay/vbdev_delay.o 00:03:37.760 CC module/blobfs/bdev/blobfs_bdev.o 00:03:37.760 CC module/bdev/malloc/bdev_malloc.o 00:03:37.760 CC module/bdev/null/bdev_null.o 00:03:37.760 CC module/bdev/lvol/vbdev_lvol.o 00:03:37.760 CC module/bdev/nvme/bdev_nvme.o 00:03:37.760 CC module/bdev/passthru/vbdev_passthru.o 00:03:37.760 LIB libspdk_sock_posix.a 00:03:37.760 CC module/bdev/null/bdev_null_rpc.o 00:03:38.018 CC module/bdev/gpt/vbdev_gpt.o 00:03:38.018 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:38.018 CC module/bdev/error/vbdev_error_rpc.o 00:03:38.018 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:38.018 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:38.018 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:38.018 LIB libspdk_bdev_null.a 00:03:38.018 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:38.018 LIB libspdk_blobfs_bdev.a 00:03:38.018 LIB libspdk_bdev_error.a 00:03:38.018 LIB libspdk_bdev_gpt.a 00:03:38.018 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:38.018 LIB libspdk_bdev_passthru.a 00:03:38.018 CC module/bdev/nvme/nvme_rpc.o 00:03:38.018 CC module/bdev/nvme/bdev_mdns_client.o 00:03:38.018 LIB libspdk_bdev_delay.a 00:03:38.018 LIB libspdk_bdev_malloc.a 00:03:38.018 CC module/bdev/raid/bdev_raid.o 00:03:38.018 CC module/bdev/raid/bdev_raid_rpc.o 00:03:38.018 CC module/bdev/split/vbdev_split.o 00:03:38.018 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:38.018 CC module/bdev/aio/bdev_aio.o 00:03:38.299 LIB libspdk_bdev_lvol.a 00:03:38.299 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:38.299 CC module/bdev/aio/bdev_aio_rpc.o 00:03:38.299 CC module/bdev/split/vbdev_split_rpc.o 00:03:38.299 CC module/bdev/raid/bdev_raid_sb.o 00:03:38.299 CC module/bdev/raid/raid0.o 00:03:38.299 CC module/bdev/raid/raid1.o 00:03:38.299 CC module/bdev/raid/concat.o 00:03:38.299 LIB libspdk_bdev_zone_block.a 00:03:38.299 LIB libspdk_bdev_aio.a 00:03:38.299 LIB libspdk_bdev_split.a 00:03:38.299 LIB libspdk_bdev_nvme.a 00:03:38.299 LIB libspdk_bdev_raid.a 00:03:38.566 CC module/event/subsystems/keyring/keyring.o 00:03:38.566 CC module/event/subsystems/vmd/vmd.o 00:03:38.566 CC module/event/subsystems/iobuf/iobuf.o 00:03:38.566 CC module/event/subsystems/scheduler/scheduler.o 00:03:38.566 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:38.566 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:38.566 CC module/event/subsystems/sock/sock.o 00:03:38.566 LIB libspdk_event_keyring.a 00:03:38.566 LIB libspdk_event_scheduler.a 00:03:38.566 LIB libspdk_event_sock.a 00:03:38.566 LIB libspdk_event_vmd.a 00:03:38.566 LIB libspdk_event_iobuf.a 00:03:38.823 CC module/event/subsystems/accel/accel.o 00:03:38.823 LIB libspdk_event_accel.a 00:03:39.080 CC module/event/subsystems/bdev/bdev.o 00:03:39.080 LIB libspdk_event_bdev.a 00:03:39.080 CC 
module/event/subsystems/nvmf/nvmf_rpc.o 00:03:39.080 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:39.080 CC module/event/subsystems/scsi/scsi.o 00:03:39.338 LIB libspdk_event_scsi.a 00:03:39.338 LIB libspdk_event_nvmf.a 00:03:39.338 CC module/event/subsystems/iscsi/iscsi.o 00:03:39.596 LIB libspdk_event_iscsi.a 00:03:39.596 CXX app/trace/trace.o 00:03:39.596 CC app/spdk_nvme_perf/perf.o 00:03:39.596 CC app/trace_record/trace_record.o 00:03:39.596 CC app/iscsi_tgt/iscsi_tgt.o 00:03:39.596 CC examples/ioat/perf/perf.o 00:03:39.596 CC app/spdk_lspci/spdk_lspci.o 00:03:39.596 CC app/nvmf_tgt/nvmf_main.o 00:03:39.596 CC app/spdk_tgt/spdk_tgt.o 00:03:39.596 CC test/thread/poller_perf/poller_perf.o 00:03:39.596 CC examples/util/zipf/zipf.o 00:03:39.596 LINK ioat_perf 00:03:39.596 LINK spdk_trace_record 00:03:39.596 LINK poller_perf 00:03:39.853 LINK spdk_lspci 00:03:39.853 LINK iscsi_tgt 00:03:39.853 LINK nvmf_tgt 00:03:39.853 LINK spdk_tgt 00:03:39.853 LINK zipf 00:03:39.853 CC examples/ioat/verify/verify.o 00:03:39.853 CC test/thread/lock/spdk_lock.o 00:03:39.853 CC app/spdk_nvme_identify/identify.o 00:03:39.853 CC test/dma/test_dma/test_dma.o 00:03:39.853 TEST_HEADER include/spdk/accel.h 00:03:39.853 TEST_HEADER include/spdk/accel_module.h 00:03:39.853 TEST_HEADER include/spdk/assert.h 00:03:39.853 LINK spdk_nvme_perf 00:03:39.853 TEST_HEADER include/spdk/barrier.h 00:03:39.853 TEST_HEADER include/spdk/base64.h 00:03:39.853 LINK verify 00:03:39.853 TEST_HEADER include/spdk/bdev.h 00:03:39.853 TEST_HEADER include/spdk/bdev_module.h 00:03:39.853 TEST_HEADER include/spdk/bdev_zone.h 00:03:39.853 TEST_HEADER include/spdk/bit_array.h 00:03:39.853 TEST_HEADER include/spdk/bit_pool.h 00:03:39.853 TEST_HEADER include/spdk/blob.h 00:03:39.853 TEST_HEADER include/spdk/blob_bdev.h 00:03:39.853 TEST_HEADER include/spdk/blobfs.h 00:03:39.853 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:39.853 TEST_HEADER include/spdk/conf.h 00:03:39.853 TEST_HEADER include/spdk/config.h 00:03:39.853 TEST_HEADER include/spdk/cpuset.h 00:03:39.853 TEST_HEADER include/spdk/crc16.h 00:03:39.853 TEST_HEADER include/spdk/crc32.h 00:03:39.853 TEST_HEADER include/spdk/crc64.h 00:03:39.853 TEST_HEADER include/spdk/dif.h 00:03:39.853 TEST_HEADER include/spdk/dma.h 00:03:39.853 TEST_HEADER include/spdk/endian.h 00:03:39.853 TEST_HEADER include/spdk/env.h 00:03:39.853 TEST_HEADER include/spdk/env_dpdk.h 00:03:39.853 TEST_HEADER include/spdk/event.h 00:03:39.853 TEST_HEADER include/spdk/fd.h 00:03:39.853 TEST_HEADER include/spdk/fd_group.h 00:03:39.853 TEST_HEADER include/spdk/file.h 00:03:39.853 TEST_HEADER include/spdk/ftl.h 00:03:39.853 TEST_HEADER include/spdk/gpt_spec.h 00:03:39.853 TEST_HEADER include/spdk/hexlify.h 00:03:39.853 TEST_HEADER include/spdk/histogram_data.h 00:03:39.854 TEST_HEADER include/spdk/idxd.h 00:03:39.854 TEST_HEADER include/spdk/idxd_spec.h 00:03:39.854 CC test/app/bdev_svc/bdev_svc.o 00:03:39.854 TEST_HEADER include/spdk/init.h 00:03:39.854 TEST_HEADER include/spdk/ioat.h 00:03:39.854 TEST_HEADER include/spdk/ioat_spec.h 00:03:39.854 TEST_HEADER include/spdk/iscsi_spec.h 00:03:39.854 TEST_HEADER include/spdk/json.h 00:03:39.854 TEST_HEADER include/spdk/jsonrpc.h 00:03:39.854 TEST_HEADER include/spdk/keyring.h 00:03:39.854 TEST_HEADER include/spdk/keyring_module.h 00:03:39.854 CC examples/thread/thread/thread_ex.o 00:03:39.854 TEST_HEADER include/spdk/likely.h 00:03:39.854 TEST_HEADER include/spdk/log.h 00:03:39.854 TEST_HEADER include/spdk/lvol.h 00:03:39.854 TEST_HEADER include/spdk/memory.h 
00:03:39.854 TEST_HEADER include/spdk/mmio.h 00:03:39.854 TEST_HEADER include/spdk/nbd.h 00:03:39.854 TEST_HEADER include/spdk/notify.h 00:03:39.854 TEST_HEADER include/spdk/nvme.h 00:03:39.854 TEST_HEADER include/spdk/nvme_intel.h 00:03:39.854 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:39.854 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:39.854 TEST_HEADER include/spdk/nvme_spec.h 00:03:39.854 TEST_HEADER include/spdk/nvme_zns.h 00:03:39.854 TEST_HEADER include/spdk/nvmf.h 00:03:39.854 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:39.854 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:39.854 TEST_HEADER include/spdk/nvmf_spec.h 00:03:39.854 TEST_HEADER include/spdk/nvmf_transport.h 00:03:39.854 TEST_HEADER include/spdk/opal.h 00:03:39.854 TEST_HEADER include/spdk/opal_spec.h 00:03:39.854 TEST_HEADER include/spdk/pci_ids.h 00:03:39.854 TEST_HEADER include/spdk/pipe.h 00:03:39.854 TEST_HEADER include/spdk/queue.h 00:03:39.854 TEST_HEADER include/spdk/reduce.h 00:03:39.854 TEST_HEADER include/spdk/rpc.h 00:03:39.854 TEST_HEADER include/spdk/scheduler.h 00:03:39.854 TEST_HEADER include/spdk/scsi.h 00:03:40.112 TEST_HEADER include/spdk/scsi_spec.h 00:03:40.112 TEST_HEADER include/spdk/sock.h 00:03:40.112 TEST_HEADER include/spdk/stdinc.h 00:03:40.112 TEST_HEADER include/spdk/string.h 00:03:40.112 TEST_HEADER include/spdk/thread.h 00:03:40.112 TEST_HEADER include/spdk/trace.h 00:03:40.112 TEST_HEADER include/spdk/trace_parser.h 00:03:40.112 TEST_HEADER include/spdk/tree.h 00:03:40.112 TEST_HEADER include/spdk/ublk.h 00:03:40.112 TEST_HEADER include/spdk/util.h 00:03:40.112 TEST_HEADER include/spdk/uuid.h 00:03:40.112 TEST_HEADER include/spdk/version.h 00:03:40.112 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:40.112 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:40.112 TEST_HEADER include/spdk/vhost.h 00:03:40.112 TEST_HEADER include/spdk/vmd.h 00:03:40.112 TEST_HEADER include/spdk/xor.h 00:03:40.112 TEST_HEADER include/spdk/zipf.h 00:03:40.112 CXX test/cpp_headers/accel.o 00:03:40.112 CC app/spdk_nvme_discover/discovery_aer.o 00:03:40.112 LINK bdev_svc 00:03:40.112 LINK test_dma 00:03:40.112 CC examples/sock/hello_world/hello_sock.o 00:03:40.112 CC test/env/mem_callbacks/mem_callbacks.o 00:03:40.112 LINK thread 00:03:40.112 LINK spdk_nvme_identify 00:03:40.112 LINK spdk_nvme_discover 00:03:40.112 CXX test/cpp_headers/accel_module.o 00:03:40.112 LINK spdk_lock 00:03:40.112 LINK hello_sock 00:03:40.112 CC test/app/histogram_perf/histogram_perf.o 00:03:40.112 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:40.370 LINK histogram_perf 00:03:40.370 CC test/env/vtophys/vtophys.o 00:03:40.370 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:40.370 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:40.370 CXX test/cpp_headers/assert.o 00:03:40.370 CC test/rpc_client/rpc_client_test.o 00:03:40.370 LINK nvme_fuzz 00:03:40.370 CC examples/vmd/lsvmd/lsvmd.o 00:03:40.370 LINK vtophys 00:03:40.370 CC examples/vmd/led/led.o 00:03:40.370 LINK env_dpdk_post_init 00:03:40.370 LINK rpc_client_test 00:03:40.370 LINK lsvmd 00:03:40.370 LINK led 00:03:40.370 CXX test/cpp_headers/barrier.o 00:03:40.370 CC examples/idxd/perf/perf.o 00:03:40.370 LINK spdk_trace 00:03:40.651 CC app/spdk_top/spdk_top.o 00:03:40.651 CC test/env/memory/memory_ut.o 00:03:40.651 CC test/app/jsoncat/jsoncat.o 00:03:40.651 CC test/unit/include/spdk/histogram_data.h/histogram_ut.o 00:03:40.651 CC app/fio/nvme/fio_plugin.o 00:03:40.651 LINK idxd_perf 00:03:40.651 LINK jsoncat 00:03:40.651 LINK mem_callbacks 00:03:40.651 CXX 
test/cpp_headers/base64.o 00:03:40.651 LINK histogram_ut 00:03:40.651 CXX test/cpp_headers/bdev.o 00:03:40.651 CC test/unit/lib/log/log.c/log_ut.o 00:03:40.651 CC test/app/stub/stub.o 00:03:40.651 CC examples/accel/perf/accel_perf.o 00:03:40.651 LINK iscsi_fuzz 00:03:40.651 LINK spdk_top 00:03:40.651 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end] 00:03:40.651 struct spdk_nvme_fdp_ruhs ruhs; 00:03:40.651 ^ 00:03:40.909 LINK log_ut 00:03:40.909 CC examples/blob/hello_world/hello_blob.o 00:03:40.909 LINK stub 00:03:40.909 CXX test/cpp_headers/bdev_module.o 00:03:40.909 CC test/env/pci/pci_ut.o 00:03:40.909 1 warning generated. 00:03:40.909 LINK spdk_nvme 00:03:40.909 LINK accel_perf 00:03:40.909 CC examples/blob/cli/blobcli.o 00:03:40.909 CC test/unit/lib/rdma/common.c/common_ut.o 00:03:40.909 LINK hello_blob 00:03:40.909 CC app/fio/bdev/fio_plugin.o 00:03:40.909 CC test/accel/dif/dif.o 00:03:40.909 CXX test/cpp_headers/bdev_zone.o 00:03:40.909 LINK pci_ut 00:03:40.909 CC test/unit/lib/util/base64.c/base64_ut.o 00:03:41.166 LINK blobcli 00:03:41.166 CC test/unit/lib/util/bit_array.c/bit_array_ut.o 00:03:41.166 CC examples/nvme/hello_world/hello_world.o 00:03:41.166 LINK dif 00:03:41.166 LINK base64_ut 00:03:41.166 CC test/blobfs/mkfs/mkfs.o 00:03:41.166 LINK spdk_bdev 00:03:41.166 CXX test/cpp_headers/bit_array.o 00:03:41.166 LINK common_ut 00:03:41.166 LINK hello_world 00:03:41.166 LINK memory_ut 00:03:41.166 CC examples/nvme/reconnect/reconnect.o 00:03:41.166 CC test/event/event_perf/event_perf.o 00:03:41.166 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:41.166 LINK mkfs 00:03:41.166 LINK bit_array_ut 00:03:41.166 CC examples/bdev/hello_world/hello_bdev.o 00:03:41.166 CXX test/cpp_headers/bit_pool.o 00:03:41.166 gmake[2]: Nothing to be done for 'all'. 
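[Editor's note] The long TEST_HEADER / CXX test/cpp_headers/*.o runs above are SPDK's header self-containment check: each public header in the list gets a tiny translation unit that does nothing but include it, so a header that fails to pull in its own dependencies (or lacks C++ guards, hence the CXX compiles) breaks right here instead of in user code. A hedged sketch of what one generated unit amounts to, written in C with a header name taken from the list above; building it needs -I pointed at the SPDK include directory:

    /* cpp_headers stub for include/spdk/accel.h: the header must compile
     * standalone. The real harness compiles an equivalent C++ unit. */
    #include <spdk/accel.h>

    int main(void) { return 0; }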
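[Editor's note] The lone diagnostic above (-Wgnu-variable-sized-type-not-at-end, emitted while building fio_plugin.c) fires because a struct that ends in a flexible array member is embedded somewhere other than the end of an enclosing struct, which C permits only as a GNU extension. A minimal reproduction, with hypothetical names standing in for the real spdk_nvme_fdp_ruhs definitions:

    /* repro.c -- clang -c repro.c shows the same warning class; pass
     * -Wgnu-variable-sized-type-not-at-end explicitly if it is masked. */
    struct ruhs_like {
        unsigned int count;
        unsigned int ids[];         /* flexible array member: variable sized */
    };

    struct holder {
        struct ruhs_like ruhs;      /* not the last field -> GNU extension */
        unsigned char tail[128];    /* any trailing field triggers it */
    };

    int main(void) { return 0; }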
00:03:41.425 LINK event_perf 00:03:41.425 CC test/event/reactor/reactor.o 00:03:41.425 CC test/unit/lib/util/cpuset.c/cpuset_ut.o 00:03:41.425 LINK reconnect 00:03:41.425 CC test/unit/lib/dma/dma.c/dma_ut.o 00:03:41.425 CC examples/bdev/bdevperf/bdevperf.o 00:03:41.425 CC test/unit/lib/util/crc16.c/crc16_ut.o 00:03:41.425 LINK reactor 00:03:41.425 LINK nvme_manage 00:03:41.425 LINK hello_bdev 00:03:41.425 CXX test/cpp_headers/blob.o 00:03:41.425 CC test/unit/lib/ioat/ioat.c/ioat_ut.o 00:03:41.425 LINK cpuset_ut 00:03:41.425 LINK crc16_ut 00:03:41.425 CC test/event/reactor_perf/reactor_perf.o 00:03:41.425 CXX test/cpp_headers/blob_bdev.o 00:03:41.425 CC test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut.o 00:03:41.425 CC examples/nvme/arbitration/arbitration.o 00:03:41.683 CC examples/nvme/hotplug/hotplug.o 00:03:41.683 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:41.683 LINK reactor_perf 00:03:41.683 LINK crc32_ieee_ut 00:03:41.683 LINK dma_ut 00:03:41.683 LINK bdevperf 00:03:41.683 LINK ioat_ut 00:03:41.683 CC test/nvme/aer/aer.o 00:03:41.683 CC test/unit/lib/util/crc32c.c/crc32c_ut.o 00:03:41.683 CC test/nvme/reset/reset.o 00:03:41.683 LINK cmb_copy 00:03:41.683 LINK arbitration 00:03:41.683 LINK hotplug 00:03:41.683 CC examples/nvme/abort/abort.o 00:03:41.683 CXX test/cpp_headers/blobfs.o 00:03:41.683 LINK crc32c_ut 00:03:41.683 CXX test/cpp_headers/blobfs_bdev.o 00:03:41.683 CC test/bdev/bdevio/bdevio.o 00:03:41.683 LINK aer 00:03:41.683 CC test/nvme/sgl/sgl.o 00:03:41.683 LINK reset 00:03:41.683 CC test/unit/lib/util/crc64.c/crc64_ut.o 00:03:41.683 CC test/unit/lib/util/dif.c/dif_ut.o 00:03:41.940 LINK abort 00:03:41.940 CC test/nvme/e2edp/nvme_dp.o 00:03:41.940 CXX test/cpp_headers/conf.o 00:03:41.940 LINK crc64_ut 00:03:41.940 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:41.940 CC test/unit/lib/util/iov.c/iov_ut.o 00:03:41.940 LINK sgl 00:03:41.940 LINK bdevio 00:03:41.940 LINK nvme_dp 00:03:41.940 LINK pmr_persistence 00:03:41.940 CC test/nvme/overhead/overhead.o 00:03:41.940 CC test/nvme/err_injection/err_injection.o 00:03:41.940 CXX test/cpp_headers/config.o 00:03:41.940 LINK iov_ut 00:03:41.940 CXX test/cpp_headers/cpuset.o 00:03:41.940 CC test/unit/lib/util/math.c/math_ut.o 00:03:41.940 CC test/unit/lib/util/pipe.c/pipe_ut.o 00:03:42.197 CC test/unit/lib/util/string.c/string_ut.o 00:03:42.197 LINK overhead 00:03:42.197 LINK err_injection 00:03:42.197 CXX test/cpp_headers/crc16.o 00:03:42.197 CC test/unit/lib/util/xor.c/xor_ut.o 00:03:42.197 LINK dif_ut 00:03:42.197 LINK math_ut 00:03:42.197 CXX test/cpp_headers/crc32.o 00:03:42.197 CXX test/cpp_headers/crc64.o 00:03:42.197 CC examples/nvmf/nvmf/nvmf.o 00:03:42.197 CC test/nvme/startup/startup.o 00:03:42.197 CXX test/cpp_headers/dif.o 00:03:42.197 LINK string_ut 00:03:42.197 CXX test/cpp_headers/dma.o 00:03:42.197 CC test/nvme/reserve/reserve.o 00:03:42.197 LINK pipe_ut 00:03:42.197 LINK xor_ut 00:03:42.197 LINK startup 00:03:42.197 CXX test/cpp_headers/endian.o 00:03:42.197 CC test/nvme/simple_copy/simple_copy.o 00:03:42.454 CXX test/cpp_headers/env.o 00:03:42.454 LINK nvmf 00:03:42.454 LINK reserve 00:03:42.454 CC test/nvme/connect_stress/connect_stress.o 00:03:42.454 CC test/unit/lib/json/json_parse.c/json_parse_ut.o 00:03:42.454 CC test/unit/lib/env_dpdk/pci_event.c/pci_event_ut.o 00:03:42.454 CC test/unit/lib/json/json_util.c/json_util_ut.o 00:03:42.454 LINK simple_copy 00:03:42.454 CC test/unit/lib/json/json_write.c/json_write_ut.o 00:03:42.455 CC test/unit/lib/idxd/idxd_user.c/idxd_user_ut.o 00:03:42.455 LINK 
connect_stress 00:03:42.455 CC test/unit/lib/idxd/idxd.c/idxd_ut.o 00:03:42.455 CXX test/cpp_headers/env_dpdk.o 00:03:42.455 CC test/nvme/boot_partition/boot_partition.o 00:03:42.455 CXX test/cpp_headers/event.o 00:03:42.455 LINK pci_event_ut 00:03:42.712 CXX test/cpp_headers/fd.o 00:03:42.712 CXX test/cpp_headers/fd_group.o 00:03:42.712 LINK json_util_ut 00:03:42.712 LINK boot_partition 00:03:42.712 LINK idxd_user_ut 00:03:42.712 CC test/nvme/compliance/nvme_compliance.o 00:03:42.712 CXX test/cpp_headers/file.o 00:03:42.712 CC test/nvme/fused_ordering/fused_ordering.o 00:03:42.712 CXX test/cpp_headers/ftl.o 00:03:42.712 LINK idxd_ut 00:03:42.712 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:42.712 CXX test/cpp_headers/gpt_spec.o 00:03:42.712 CC test/nvme/fdp/fdp.o 00:03:42.712 CXX test/cpp_headers/hexlify.o 00:03:42.712 LINK fused_ordering 00:03:42.970 LINK json_write_ut 00:03:42.970 CXX test/cpp_headers/histogram_data.o 00:03:42.970 LINK nvme_compliance 00:03:42.970 LINK doorbell_aers 00:03:42.970 CXX test/cpp_headers/idxd.o 00:03:42.970 CXX test/cpp_headers/idxd_spec.o 00:03:42.970 LINK json_parse_ut 00:03:42.970 CXX test/cpp_headers/init.o 00:03:42.970 LINK fdp 00:03:42.970 CXX test/cpp_headers/ioat.o 00:03:42.970 CXX test/cpp_headers/ioat_spec.o 00:03:42.970 CXX test/cpp_headers/iscsi_spec.o 00:03:42.970 CXX test/cpp_headers/json.o 00:03:42.970 CXX test/cpp_headers/jsonrpc.o 00:03:42.970 CXX test/cpp_headers/keyring.o 00:03:42.970 CC test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut.o 00:03:42.970 CXX test/cpp_headers/keyring_module.o 00:03:43.227 CXX test/cpp_headers/likely.o 00:03:43.227 CXX test/cpp_headers/log.o 00:03:43.227 CXX test/cpp_headers/lvol.o 00:03:43.227 CXX test/cpp_headers/memory.o 00:03:43.227 CXX test/cpp_headers/mmio.o 00:03:43.227 CXX test/cpp_headers/nbd.o 00:03:43.227 CXX test/cpp_headers/notify.o 00:03:43.227 CXX test/cpp_headers/nvme.o 00:03:43.227 CXX test/cpp_headers/nvme_intel.o 00:03:43.227 LINK jsonrpc_server_ut 00:03:43.227 CXX test/cpp_headers/nvme_ocssd.o 00:03:43.227 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:43.227 CXX test/cpp_headers/nvme_spec.o 00:03:43.227 CXX test/cpp_headers/nvme_zns.o 00:03:43.227 CXX test/cpp_headers/nvmf.o 00:03:43.227 CXX test/cpp_headers/nvmf_cmd.o 00:03:43.227 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:43.485 CC test/unit/lib/rpc/rpc.c/rpc_ut.o 00:03:43.485 CXX test/cpp_headers/nvmf_spec.o 00:03:43.485 CXX test/cpp_headers/nvmf_transport.o 00:03:43.485 CXX test/cpp_headers/opal.o 00:03:43.485 CXX test/cpp_headers/opal_spec.o 00:03:43.485 CXX test/cpp_headers/pci_ids.o 00:03:43.485 CXX test/cpp_headers/pipe.o 00:03:43.485 CXX test/cpp_headers/queue.o 00:03:43.485 CXX test/cpp_headers/reduce.o 00:03:43.485 CXX test/cpp_headers/rpc.o 00:03:43.485 CXX test/cpp_headers/scheduler.o 00:03:43.485 CXX test/cpp_headers/scsi.o 00:03:43.485 CXX test/cpp_headers/scsi_spec.o 00:03:43.485 CXX test/cpp_headers/sock.o 00:03:43.485 CXX test/cpp_headers/stdinc.o 00:03:43.485 CXX test/cpp_headers/string.o 00:03:43.485 CXX test/cpp_headers/thread.o 00:03:43.743 LINK rpc_ut 00:03:43.743 CXX test/cpp_headers/trace.o 00:03:43.743 CC test/unit/lib/thread/thread.c/thread_ut.o 00:03:43.743 CXX test/cpp_headers/trace_parser.o 00:03:43.743 CXX test/cpp_headers/tree.o 00:03:43.743 CC test/unit/lib/sock/sock.c/sock_ut.o 00:03:43.743 CXX test/cpp_headers/ublk.o 00:03:43.743 CC test/unit/lib/thread/iobuf.c/iobuf_ut.o 00:03:43.743 CC test/unit/lib/sock/posix.c/posix_ut.o 00:03:43.743 CXX test/cpp_headers/util.o 00:03:43.743 CC 
test/unit/lib/notify/notify.c/notify_ut.o 00:03:43.743 CXX test/cpp_headers/uuid.o 00:03:43.743 CC test/unit/lib/keyring/keyring.c/keyring_ut.o 00:03:44.001 CXX test/cpp_headers/version.o 00:03:44.001 CXX test/cpp_headers/vfio_user_pci.o 00:03:44.001 CXX test/cpp_headers/vfio_user_spec.o 00:03:44.001 CXX test/cpp_headers/vhost.o 00:03:44.001 CXX test/cpp_headers/vmd.o 00:03:44.001 LINK keyring_ut 00:03:44.001 LINK notify_ut 00:03:44.001 CXX test/cpp_headers/xor.o 00:03:44.001 CXX test/cpp_headers/zipf.o 00:03:44.001 LINK iobuf_ut 00:03:44.259 LINK posix_ut 00:03:44.259 LINK thread_ut 00:03:44.259 LINK sock_ut 00:03:44.259 CC test/unit/lib/init/rpc.c/rpc_ut.o 00:03:44.259 CC test/unit/lib/init/subsystem.c/subsystem_ut.o 00:03:44.259 CC test/unit/lib/accel/accel.c/accel_ut.o 00:03:44.259 CC test/unit/lib/blob/blob_bdev.c/blob_bdev_ut.o 00:03:44.259 CC test/unit/lib/blob/blob.c/blob_ut.o 00:03:44.516 CC test/unit/lib/nvme/nvme.c/nvme_ut.o 00:03:44.516 CC test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut.o 00:03:44.516 CC test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut.o 00:03:44.516 CC test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut.o 00:03:44.516 CC test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut.o 00:03:44.516 LINK rpc_ut 00:03:44.516 LINK subsystem_ut 00:03:44.516 CC test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut.o 00:03:44.516 LINK blob_bdev_ut 00:03:44.775 CC test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut.o 00:03:44.775 CC test/unit/lib/event/app.c/app_ut.o 00:03:45.033 LINK app_ut 00:03:45.033 LINK accel_ut 00:03:45.033 CC test/unit/lib/event/reactor.c/reactor_ut.o 00:03:45.033 LINK nvme_ctrlr_ocssd_cmd_ut 00:03:45.033 LINK nvme_ns_ut 00:03:45.033 CC test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut.o 00:03:45.291 CC test/unit/lib/bdev/part.c/part_ut.o 00:03:45.291 CC test/unit/lib/bdev/bdev.c/bdev_ut.o 00:03:45.291 LINK nvme_ut 00:03:45.291 LINK nvme_ctrlr_cmd_ut 00:03:45.291 CC test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut.o 00:03:45.291 LINK reactor_ut 00:03:45.291 CC test/unit/lib/bdev/gpt/gpt.c/gpt_ut.o 00:03:45.549 CC test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut.o 00:03:45.549 LINK scsi_nvme_ut 00:03:45.549 LINK nvme_ns_ocssd_cmd_ut 00:03:45.549 LINK nvme_ns_cmd_ut 00:03:45.549 CC test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut.o 00:03:45.549 LINK gpt_ut 00:03:45.549 CC test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut.o 00:03:45.549 LINK nvme_ctrlr_ut 00:03:45.549 CC test/unit/lib/bdev/mt/bdev.c/bdev_ut.o 00:03:45.549 CC test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut.o 00:03:45.807 CC test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut.o 00:03:46.065 LINK blob_ut 00:03:46.065 LINK part_ut 00:03:46.065 LINK nvme_pcie_ut 00:03:46.065 CC test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut.o 00:03:46.065 CC test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut.o 00:03:46.065 LINK vbdev_lvol_ut 00:03:46.065 LINK nvme_poll_group_ut 00:03:46.065 CC test/unit/lib/bdev/raid/concat.c/concat_ut.o 00:03:46.065 CC test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut.o 00:03:46.065 LINK nvme_quirks_ut 00:03:46.065 CC test/unit/lib/blobfs/tree.c/tree_ut.o 00:03:46.323 CC test/unit/lib/bdev/raid/raid1.c/raid1_ut.o 00:03:46.323 LINK bdev_raid_ut 00:03:46.323 LINK tree_ut 00:03:46.323 LINK nvme_qpair_ut 00:03:46.323 LINK bdev_zone_ut 00:03:46.323 LINK bdev_raid_sb_ut 00:03:46.323 CC test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut.o 00:03:46.323 CC test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut.o 00:03:46.323 LINK concat_ut 00:03:46.323 CC test/unit/lib/bdev/raid/raid0.c/raid0_ut.o 
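[Editor's note] The CC test/unit/lib/.../*_ut.o entries running through this stretch are SPDK's unit tests, which are CUnit-based (the _ut suffix marks them). As a minimal sketch of the shape such a test takes, written against plain CUnit rather than SPDK's wrapper macros; the suite and test names are hypothetical:

    /* example_ut.c -- hedged CUnit skeleton. Build: cc example_ut.c -lcunit */
    #include <CUnit/Basic.h>

    static void ut_example(void) {
        CU_ASSERT_EQUAL(1 + 1, 2);  /* a trivial assertion */
    }

    int main(void) {
        if (CU_initialize_registry() != CUE_SUCCESS)
            return CU_get_error();

        CU_pSuite suite = CU_add_suite("example", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "ut_example", ut_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE);  /* print per-test results */
        CU_basic_run_tests();
        unsigned int failures = CU_get_number_of_failures();
        CU_cleanup_registry();
        return (int)failures;               /* nonzero on any failure */
    }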
00:03:46.323 CC test/unit/lib/lvol/lvol.c/lvol_ut.o 00:03:46.323 CC test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut.o 00:03:46.323 CC test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut.o 00:03:46.323 LINK bdev_ut 00:03:46.580 LINK raid1_ut 00:03:46.580 LINK bdev_ut 00:03:46.580 CC test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut.o 00:03:46.581 CC test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut.o 00:03:46.581 CC test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut.o 00:03:46.581 LINK raid0_ut 00:03:46.581 LINK blobfs_bdev_ut 00:03:46.581 CC test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut.o 00:03:46.838 LINK blobfs_async_ut 00:03:46.838 LINK blobfs_sync_ut 00:03:46.838 CC test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut.o 00:03:46.838 CC test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut.o 00:03:46.838 CC test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut.o 00:03:47.095 LINK lvol_ut 00:03:47.096 LINK nvme_transport_ut 00:03:47.096 LINK vbdev_zone_block_ut 00:03:47.096 LINK nvme_io_msg_ut 00:03:47.096 LINK nvme_opal_ut 00:03:47.096 LINK nvme_tcp_ut 00:03:47.355 LINK nvme_pcie_common_ut 00:03:47.355 LINK nvme_fabric_ut 00:03:47.976 LINK nvme_rdma_ut 00:03:47.976 LINK bdev_nvme_ut 00:03:48.234 CC test/unit/lib/nvmf/transport.c/transport_ut.o 00:03:48.234 CC test/unit/lib/nvmf/auth.c/auth_ut.o 00:03:48.234 CC test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut.o 00:03:48.234 CC test/unit/lib/nvmf/subsystem.c/subsystem_ut.o 00:03:48.234 CC test/unit/lib/nvmf/nvmf.c/nvmf_ut.o 00:03:48.234 CC test/unit/lib/nvmf/tcp.c/tcp_ut.o 00:03:48.234 CC test/unit/lib/scsi/dev.c/dev_ut.o 00:03:48.234 CC test/unit/lib/nvmf/ctrlr.c/ctrlr_ut.o 00:03:48.234 CC test/unit/lib/nvmf/rdma.c/rdma_ut.o 00:03:48.234 CC test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut.o 00:03:48.234 LINK dev_ut 00:03:48.492 CC test/unit/lib/scsi/lun.c/lun_ut.o 00:03:48.492 LINK ctrlr_bdev_ut 00:03:48.492 CC test/unit/lib/scsi/scsi.c/scsi_ut.o 00:03:48.492 LINK nvmf_ut 00:03:48.492 LINK auth_ut 00:03:48.492 CC test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut.o 00:03:48.751 LINK ctrlr_discovery_ut 00:03:48.751 CC test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut.o 00:03:48.751 LINK scsi_ut 00:03:48.751 LINK lun_ut 00:03:48.751 LINK transport_ut 00:03:48.751 LINK subsystem_ut 00:03:48.751 LINK rdma_ut 00:03:48.751 LINK ctrlr_ut 00:03:49.009 LINK scsi_pr_ut 00:03:49.009 LINK scsi_bdev_ut 00:03:49.009 LINK tcp_ut 00:03:49.009 CC test/unit/lib/iscsi/init_grp.c/init_grp_ut.o 00:03:49.009 CC test/unit/lib/iscsi/conn.c/conn_ut.o 00:03:49.009 CC test/unit/lib/iscsi/param.c/param_ut.o 00:03:49.009 CC test/unit/lib/iscsi/iscsi.c/iscsi_ut.o 00:03:49.009 CC test/unit/lib/iscsi/portal_grp.c/portal_grp_ut.o 00:03:49.009 CC test/unit/lib/iscsi/tgt_node.c/tgt_node_ut.o 00:03:49.267 LINK param_ut 00:03:49.267 LINK init_grp_ut 00:03:49.525 LINK portal_grp_ut 00:03:49.525 LINK tgt_node_ut 00:03:49.525 LINK conn_ut 00:03:49.784 LINK iscsi_ut 00:03:49.784 00:03:49.784 real 1m3.891s 00:03:49.784 user 4m19.218s 00:03:49.784 sys 0m49.154s 00:03:49.784 ************************************ 00:03:49.784 END TEST unittest_build 00:03:49.784 21:03:01 unittest_build -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:03:49.784 21:03:01 unittest_build -- common/autotest_common.sh@10 -- $ set +x 00:03:49.784 ************************************ 00:03:49.784 21:03:01 -- common/autotest_common.sh@1142 -- $ return 0 00:03:49.784 21:03:01 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:49.784 21:03:01 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:49.784 21:03:01 -- 
pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:49.784 21:03:01 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:49.784 21:03:01 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:49.784 21:03:01 -- pm/common@44 -- $ pid=1341 00:03:49.784 21:03:01 -- pm/common@50 -- $ kill -TERM 1341 00:03:50.043 21:03:01 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:50.043 21:03:01 -- nvmf/common.sh@7 -- # uname -s 00:03:50.043 21:03:01 -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:03:50.043 21:03:01 -- nvmf/common.sh@7 -- # return 0 00:03:50.043 21:03:01 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:50.043 21:03:01 -- spdk/autotest.sh@32 -- # uname -s 00:03:50.043 21:03:01 -- spdk/autotest.sh@32 -- # '[' FreeBSD = Linux ']' 00:03:50.043 21:03:01 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:50.043 21:03:01 -- pm/common@17 -- # local monitor 00:03:50.043 21:03:01 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:50.043 21:03:01 -- pm/common@25 -- # sleep 1 00:03:50.043 21:03:01 -- pm/common@21 -- # date +%s 00:03:50.043 21:03:01 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1720990981 00:03:50.043 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1720990981_collect-vmstat.pm.log 00:03:51.051 21:03:02 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:51.051 21:03:02 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:51.051 21:03:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:51.051 21:03:02 -- common/autotest_common.sh@10 -- # set +x 00:03:51.051 21:03:02 -- spdk/autotest.sh@59 -- # create_test_list 00:03:51.051 21:03:02 -- common/autotest_common.sh@746 -- # xtrace_disable 00:03:51.051 21:03:02 -- common/autotest_common.sh@10 -- # set +x 00:03:51.051 21:03:02 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:51.051 21:03:02 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:51.051 21:03:02 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:51.051 21:03:02 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:51.051 21:03:02 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:51.051 21:03:02 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:51.051 21:03:02 -- common/autotest_common.sh@1455 -- # uname 00:03:51.051 21:03:02 -- common/autotest_common.sh@1455 -- # '[' FreeBSD = FreeBSD ']' 00:03:51.051 21:03:02 -- common/autotest_common.sh@1456 -- # kldunload contigmem.ko 00:03:51.051 kldunload: can't find file contigmem.ko 00:03:51.051 21:03:02 -- common/autotest_common.sh@1456 -- # true 00:03:51.051 21:03:02 -- common/autotest_common.sh@1457 -- # '[' -n '' ']' 00:03:51.051 21:03:02 -- common/autotest_common.sh@1463 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/modules/ 00:03:51.051 21:03:02 -- common/autotest_common.sh@1464 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/contigmem.ko /boot/kernel/ 00:03:51.051 21:03:02 -- common/autotest_common.sh@1465 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/modules/ 00:03:51.051 21:03:02 -- common/autotest_common.sh@1466 -- # cp -f /home/vagrant/spdk_repo/spdk/dpdk/build/kmod/nic_uio.ko /boot/kernel/ 00:03:51.051 21:03:02 -- 
spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:51.051 21:03:02 -- common/autotest_common.sh@1475 -- # uname 00:03:51.051 21:03:02 -- common/autotest_common.sh@1475 -- # [[ FreeBSD = FreeBSD ]] 00:03:51.051 21:03:02 -- common/autotest_common.sh@1475 -- # sysctl -n kern.ipc.maxsockbuf 00:03:51.051 21:03:02 -- common/autotest_common.sh@1475 -- # (( 2097152 < 4194304 )) 00:03:51.051 21:03:02 -- common/autotest_common.sh@1476 -- # sysctl kern.ipc.maxsockbuf=4194304 00:03:51.051 kern.ipc.maxsockbuf: 2097152 -> 4194304 00:03:51.051 21:03:02 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:03:51.051 21:03:02 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=clang 00:03:51.051 21:03:02 -- spdk/autotest.sh@72 -- # hash lcov 00:03:51.051 /home/vagrant/spdk_repo/spdk/autotest.sh: line 72: hash: lcov: not found 00:03:51.051 21:03:02 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:03:51.051 21:03:02 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:51.051 21:03:02 -- common/autotest_common.sh@10 -- # set +x 00:03:51.051 21:03:02 -- spdk/autotest.sh@91 -- # rm -f 00:03:51.051 21:03:02 -- spdk/autotest.sh@94 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:51.315 kldunload: can't find file contigmem.ko 00:03:51.315 kldunload: can't find file nic_uio.ko 00:03:51.315 21:03:02 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:03:51.315 21:03:02 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:51.315 21:03:02 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:51.315 21:03:02 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:51.315 21:03:02 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:03:51.315 21:03:02 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:03:51.315 21:03:02 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:03:51.315 21:03:02 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0ns1 00:03:51.315 21:03:02 -- scripts/common.sh@378 -- # local block=/dev/nvme0ns1 pt 00:03:51.315 21:03:02 -- scripts/common.sh@387 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0ns1 00:03:51.315 nvme0ns1 is not a block device 00:03:51.315 21:03:02 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0ns1 00:03:51.315 /home/vagrant/spdk_repo/spdk/scripts/common.sh: line 391: blkid: command not found 00:03:51.315 21:03:02 -- scripts/common.sh@391 -- # pt= 00:03:51.315 21:03:02 -- scripts/common.sh@392 -- # return 1 00:03:51.315 21:03:02 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0ns1 bs=1M count=1 00:03:51.315 1+0 records in 00:03:51.315 1+0 records out 00:03:51.315 1048576 bytes transferred in 0.006264 secs (167405609 bytes/sec) 00:03:51.315 21:03:02 -- spdk/autotest.sh@118 -- # sync 00:03:52.249 21:03:03 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:52.249 21:03:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:52.249 21:03:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:53.181 21:03:04 -- spdk/autotest.sh@124 -- # uname -s 00:03:53.181 21:03:04 -- spdk/autotest.sh@124 -- # '[' FreeBSD = Linux ']' 00:03:53.181 21:03:04 -- spdk/autotest.sh@128 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:53.181 Contigmem (not present) 00:03:53.181 Buffer Size: not set 00:03:53.181 Num Buffers: not set 00:03:53.181 00:03:53.181 00:03:53.181 Type BDF Vendor Device Driver 00:03:53.181 NVMe 0:0:16:0 0x1b36 0x0010 nvme0 00:03:53.181 21:03:04 -- spdk/autotest.sh@130 -- # uname -s 00:03:53.181 21:03:04 -- spdk/autotest.sh@130 -- # [[ 
FreeBSD == Linux ]] 00:03:53.181 21:03:04 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:53.181 21:03:04 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.181 21:03:04 -- common/autotest_common.sh@10 -- # set +x 00:03:53.181 21:03:04 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:53.181 21:03:04 -- common/autotest_common.sh@722 -- # xtrace_disable 00:03:53.181 21:03:04 -- common/autotest_common.sh@10 -- # set +x 00:03:53.181 21:03:04 -- spdk/autotest.sh@139 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:53.181 kldunload: can't find file nic_uio.ko 00:03:53.181 hw.nic_uio.bdfs="0:16:0" 00:03:53.181 hw.contigmem.num_buffers="8" 00:03:53.181 hw.contigmem.buffer_size="268435456" 00:03:53.749 21:03:05 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:53.749 21:03:05 -- common/autotest_common.sh@728 -- # xtrace_disable 00:03:53.749 21:03:05 -- common/autotest_common.sh@10 -- # set +x 00:03:54.008 21:03:05 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:54.008 21:03:05 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:54.008 21:03:05 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:54.008 21:03:05 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:54.008 21:03:05 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:54.008 21:03:05 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:54.008 21:03:05 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:54.008 21:03:05 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:54.008 21:03:05 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:54.008 21:03:05 -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:03:54.008 21:03:05 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:54.008 21:03:05 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:54.008 21:03:05 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:03:54.008 21:03:05 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:54.008 21:03:05 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:03:54.008 cat: /sys/bus/pci/devices/0000:00:10.0/device: No such file or directory 00:03:54.008 21:03:05 -- common/autotest_common.sh@1580 -- # device= 00:03:54.008 21:03:05 -- common/autotest_common.sh@1580 -- # true 00:03:54.008 21:03:05 -- common/autotest_common.sh@1581 -- # [[ '' == \0\x\0\a\5\4 ]] 00:03:54.008 21:03:05 -- common/autotest_common.sh@1586 -- # printf '%s\n' 00:03:54.008 21:03:05 -- common/autotest_common.sh@1592 -- # [[ -z '' ]] 00:03:54.008 21:03:05 -- common/autotest_common.sh@1593 -- # return 0 00:03:54.008 21:03:05 -- spdk/autotest.sh@150 -- # '[' 1 -eq 1 ']' 00:03:54.008 21:03:05 -- spdk/autotest.sh@151 -- # run_test unittest /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:54.008 21:03:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.008 21:03:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.008 21:03:05 -- common/autotest_common.sh@10 -- # set +x 00:03:54.008 ************************************ 00:03:54.008 START TEST unittest 00:03:54.008 ************************************ 00:03:54.008 21:03:05 unittest -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:54.008 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:54.008 ++ readlink -f 
/home/vagrant/spdk_repo/spdk/test/unit 00:03:54.008 + testdir=/home/vagrant/spdk_repo/spdk/test/unit 00:03:54.008 +++ dirname /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh 00:03:54.008 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/unit/../.. 00:03:54.008 + rootdir=/home/vagrant/spdk_repo/spdk 00:03:54.008 + source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:03:54.008 ++ rpc_py=rpc_cmd 00:03:54.008 ++ set -e 00:03:54.008 ++ shopt -s nullglob 00:03:54.008 ++ shopt -s extglob 00:03:54.008 ++ shopt -s inherit_errexit 00:03:54.008 ++ '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:03:54.008 ++ [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:03:54.008 ++ source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:03:54.008 +++ CONFIG_WPDK_DIR= 00:03:54.008 +++ CONFIG_ASAN=n 00:03:54.008 +++ CONFIG_VBDEV_COMPRESS=n 00:03:54.008 +++ CONFIG_HAVE_EXECINFO_H=y 00:03:54.008 +++ CONFIG_USDT=n 00:03:54.008 +++ CONFIG_CUSTOMOCF=n 00:03:54.008 +++ CONFIG_PREFIX=/usr/local 00:03:54.008 +++ CONFIG_RBD=n 00:03:54.008 +++ CONFIG_LIBDIR= 00:03:54.008 +++ CONFIG_IDXD=y 00:03:54.008 +++ CONFIG_NVME_CUSE=n 00:03:54.008 +++ CONFIG_SMA=n 00:03:54.008 +++ CONFIG_VTUNE=n 00:03:54.008 +++ CONFIG_TSAN=n 00:03:54.008 +++ CONFIG_RDMA_SEND_WITH_INVAL=y 00:03:54.008 +++ CONFIG_VFIO_USER_DIR= 00:03:54.008 +++ CONFIG_PGO_CAPTURE=n 00:03:54.008 +++ CONFIG_HAVE_UUID_GENERATE_SHA1=n 00:03:54.008 +++ CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:54.008 +++ CONFIG_LTO=n 00:03:54.008 +++ CONFIG_ISCSI_INITIATOR=n 00:03:54.008 +++ CONFIG_CET=n 00:03:54.008 +++ CONFIG_VBDEV_COMPRESS_MLX5=n 00:03:54.008 +++ CONFIG_OCF_PATH= 00:03:54.008 +++ CONFIG_RDMA_SET_TOS=y 00:03:54.008 +++ CONFIG_HAVE_ARC4RANDOM=y 00:03:54.008 +++ CONFIG_HAVE_LIBARCHIVE=n 00:03:54.008 +++ CONFIG_UBLK=n 00:03:54.008 +++ CONFIG_ISAL_CRYPTO=y 00:03:54.008 +++ CONFIG_OPENSSL_PATH= 00:03:54.008 +++ CONFIG_OCF=n 00:03:54.008 +++ CONFIG_FUSE=n 00:03:54.008 +++ CONFIG_VTUNE_DIR= 00:03:54.008 +++ CONFIG_FUZZER_LIB= 00:03:54.008 +++ CONFIG_FUZZER=n 00:03:54.008 +++ CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:03:54.008 +++ CONFIG_CRYPTO=n 00:03:54.008 +++ CONFIG_PGO_USE=n 00:03:54.008 +++ CONFIG_VHOST=n 00:03:54.008 +++ CONFIG_DAOS=n 00:03:54.008 +++ CONFIG_DPDK_INC_DIR= 00:03:54.008 +++ CONFIG_DAOS_DIR= 00:03:54.008 +++ CONFIG_UNIT_TESTS=y 00:03:54.008 +++ CONFIG_RDMA_SET_ACK_TIMEOUT=n 00:03:54.008 +++ CONFIG_VIRTIO=n 00:03:54.008 +++ CONFIG_DPDK_UADK=n 00:03:54.008 +++ CONFIG_COVERAGE=n 00:03:54.008 +++ CONFIG_RDMA=y 00:03:54.008 +++ CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:03:54.008 +++ CONFIG_URING_PATH= 00:03:54.008 +++ CONFIG_XNVME=n 00:03:54.008 +++ CONFIG_VFIO_USER=n 00:03:54.008 +++ CONFIG_ARCH=native 00:03:54.008 +++ CONFIG_HAVE_EVP_MAC=y 00:03:54.008 +++ CONFIG_URING_ZNS=n 00:03:54.008 +++ CONFIG_WERROR=y 00:03:54.008 +++ CONFIG_HAVE_LIBBSD=n 00:03:54.008 +++ CONFIG_UBSAN=n 00:03:54.008 +++ CONFIG_IPSEC_MB_DIR= 00:03:54.008 +++ CONFIG_GOLANG=n 00:03:54.008 +++ CONFIG_ISAL=y 00:03:54.008 +++ CONFIG_IDXD_KERNEL=n 00:03:54.008 +++ CONFIG_DPDK_LIB_DIR= 00:03:54.008 +++ CONFIG_RDMA_PROV=verbs 00:03:54.008 +++ CONFIG_APPS=y 00:03:54.008 +++ CONFIG_SHARED=n 00:03:54.008 +++ CONFIG_HAVE_KEYUTILS=n 00:03:54.008 +++ CONFIG_FC_PATH= 00:03:54.008 +++ CONFIG_DPDK_PKG_CONFIG=n 00:03:54.008 +++ CONFIG_FC=n 00:03:54.008 +++ CONFIG_AVAHI=n 00:03:54.008 +++ CONFIG_FIO_PLUGIN=y 00:03:54.008 +++ CONFIG_RAID5F=n 00:03:54.008 +++ CONFIG_EXAMPLES=y 00:03:54.008 +++ CONFIG_TESTS=y 00:03:54.008 +++ 
CONFIG_CRYPTO_MLX5=n 00:03:54.008 +++ CONFIG_MAX_LCORES=128 00:03:54.008 +++ CONFIG_IPSEC_MB=n 00:03:54.008 +++ CONFIG_PGO_DIR= 00:03:54.008 +++ CONFIG_DEBUG=y 00:03:54.008 +++ CONFIG_DPDK_COMPRESSDEV=n 00:03:54.008 +++ CONFIG_CROSS_PREFIX= 00:03:54.008 +++ CONFIG_URING=n 00:03:54.008 ++ source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:54.008 +++++ dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:03:54.008 ++++ readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:03:54.008 +++ _root=/home/vagrant/spdk_repo/spdk/test/common 00:03:54.008 +++ _root=/home/vagrant/spdk_repo/spdk 00:03:54.008 +++ _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:03:54.008 +++ _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:03:54.008 +++ _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:03:54.008 +++ VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:03:54.008 +++ ISCSI_APP=("$_app_dir/iscsi_tgt") 00:03:54.008 +++ NVMF_APP=("$_app_dir/nvmf_tgt") 00:03:54.008 +++ VHOST_APP=("$_app_dir/vhost") 00:03:54.008 +++ DD_APP=("$_app_dir/spdk_dd") 00:03:54.008 +++ SPDK_APP=("$_app_dir/spdk_tgt") 00:03:54.008 +++ [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:03:54.008 +++ [[ #ifndef SPDK_CONFIG_H 00:03:54.008 #define SPDK_CONFIG_H 00:03:54.008 #define SPDK_CONFIG_APPS 1 00:03:54.008 #define SPDK_CONFIG_ARCH native 00:03:54.008 #undef SPDK_CONFIG_ASAN 00:03:54.008 #undef SPDK_CONFIG_AVAHI 00:03:54.008 #undef SPDK_CONFIG_CET 00:03:54.008 #undef SPDK_CONFIG_COVERAGE 00:03:54.008 #define SPDK_CONFIG_CROSS_PREFIX 00:03:54.008 #undef SPDK_CONFIG_CRYPTO 00:03:54.009 #undef SPDK_CONFIG_CRYPTO_MLX5 00:03:54.009 #undef SPDK_CONFIG_CUSTOMOCF 00:03:54.009 #undef SPDK_CONFIG_DAOS 00:03:54.009 #define SPDK_CONFIG_DAOS_DIR 00:03:54.009 #define SPDK_CONFIG_DEBUG 1 00:03:54.009 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:03:54.009 #define SPDK_CONFIG_DPDK_DIR /home/vagrant/spdk_repo/spdk/dpdk/build 00:03:54.009 #define SPDK_CONFIG_DPDK_INC_DIR 00:03:54.009 #define SPDK_CONFIG_DPDK_LIB_DIR 00:03:54.009 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:03:54.009 #undef SPDK_CONFIG_DPDK_UADK 00:03:54.009 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:03:54.009 #define SPDK_CONFIG_EXAMPLES 1 00:03:54.009 #undef SPDK_CONFIG_FC 00:03:54.009 #define SPDK_CONFIG_FC_PATH 00:03:54.009 #define SPDK_CONFIG_FIO_PLUGIN 1 00:03:54.009 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:03:54.009 #undef SPDK_CONFIG_FUSE 00:03:54.009 #undef SPDK_CONFIG_FUZZER 00:03:54.009 #define SPDK_CONFIG_FUZZER_LIB 00:03:54.009 #undef SPDK_CONFIG_GOLANG 00:03:54.009 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:03:54.009 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:03:54.009 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:03:54.009 #undef SPDK_CONFIG_HAVE_KEYUTILS 00:03:54.009 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:03:54.009 #undef SPDK_CONFIG_HAVE_LIBBSD 00:03:54.009 #undef SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 00:03:54.009 #define SPDK_CONFIG_IDXD 1 00:03:54.009 #undef SPDK_CONFIG_IDXD_KERNEL 00:03:54.009 #undef SPDK_CONFIG_IPSEC_MB 00:03:54.009 #define SPDK_CONFIG_IPSEC_MB_DIR 00:03:54.009 #define SPDK_CONFIG_ISAL 1 00:03:54.009 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:03:54.009 #undef SPDK_CONFIG_ISCSI_INITIATOR 00:03:54.009 #define SPDK_CONFIG_LIBDIR 00:03:54.009 #undef SPDK_CONFIG_LTO 00:03:54.009 #define SPDK_CONFIG_MAX_LCORES 128 00:03:54.009 #undef SPDK_CONFIG_NVME_CUSE 00:03:54.009 #undef SPDK_CONFIG_OCF 00:03:54.009 #define SPDK_CONFIG_OCF_PATH 00:03:54.009 #define SPDK_CONFIG_OPENSSL_PATH 
00:03:54.009 #undef SPDK_CONFIG_PGO_CAPTURE 00:03:54.009 #define SPDK_CONFIG_PGO_DIR 00:03:54.009 #undef SPDK_CONFIG_PGO_USE 00:03:54.009 #define SPDK_CONFIG_PREFIX /usr/local 00:03:54.009 #undef SPDK_CONFIG_RAID5F 00:03:54.009 #undef SPDK_CONFIG_RBD 00:03:54.009 #define SPDK_CONFIG_RDMA 1 00:03:54.009 #define SPDK_CONFIG_RDMA_PROV verbs 00:03:54.009 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:03:54.009 #undef SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 00:03:54.009 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:03:54.009 #undef SPDK_CONFIG_SHARED 00:03:54.009 #undef SPDK_CONFIG_SMA 00:03:54.009 #define SPDK_CONFIG_TESTS 1 00:03:54.009 #undef SPDK_CONFIG_TSAN 00:03:54.009 #undef SPDK_CONFIG_UBLK 00:03:54.009 #undef SPDK_CONFIG_UBSAN 00:03:54.009 #define SPDK_CONFIG_UNIT_TESTS 1 00:03:54.009 #undef SPDK_CONFIG_URING 00:03:54.009 #define SPDK_CONFIG_URING_PATH 00:03:54.009 #undef SPDK_CONFIG_URING_ZNS 00:03:54.009 #undef SPDK_CONFIG_USDT 00:03:54.009 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:03:54.009 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:03:54.009 #undef SPDK_CONFIG_VFIO_USER 00:03:54.009 #define SPDK_CONFIG_VFIO_USER_DIR 00:03:54.009 #undef SPDK_CONFIG_VHOST 00:03:54.009 #undef SPDK_CONFIG_VIRTIO 00:03:54.009 #undef SPDK_CONFIG_VTUNE 00:03:54.009 #define SPDK_CONFIG_VTUNE_DIR 00:03:54.009 #define SPDK_CONFIG_WERROR 1 00:03:54.009 #define SPDK_CONFIG_WPDK_DIR 00:03:54.009 #undef SPDK_CONFIG_XNVME 00:03:54.009 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:03:54.009 +++ (( SPDK_AUTOTEST_DEBUG_APPS )) 00:03:54.009 ++ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:54.009 +++ [[ -e /bin/wpdk_common.sh ]] 00:03:54.009 +++ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:54.009 +++ source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:54.009 ++++ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:54.009 ++++ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:54.009 ++++ export PATH 00:03:54.009 ++++ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:03:54.009 ++ source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:54.009 +++++ dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:03:54.009 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:54.009 +++ _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:03:54.009 ++++ readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:03:54.009 +++ _pmrootdir=/home/vagrant/spdk_repo/spdk 00:03:54.009 +++ TEST_TAG=N/A 00:03:54.009 +++ TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:03:54.009 +++ PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:03:54.009 ++++ uname -s 00:03:54.009 +++ PM_OS=FreeBSD 00:03:54.009 +++ MONITOR_RESOURCES_SUDO=() 00:03:54.009 +++ declare -A MONITOR_RESOURCES_SUDO 00:03:54.009 +++ MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:03:54.009 +++ MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:03:54.009 +++ MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:03:54.009 +++ MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:03:54.009 +++ SUDO[0]= 00:03:54.009 +++ SUDO[1]='sudo -E' 00:03:54.009 +++ MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:03:54.009 +++ [[ FreeBSD == FreeBSD ]] 
00:03:54.009 +++ MONITOR_RESOURCES=(collect-vmstat) 00:03:54.009 +++ [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:03:54.009 ++ : 1 00:03:54.009 ++ export RUN_NIGHTLY 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_AUTOTEST_DEBUG_APPS 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_RUN_VALGRIND 00:03:54.009 ++ : 1 00:03:54.009 ++ export SPDK_RUN_FUNCTIONAL_TEST 00:03:54.009 ++ : 1 00:03:54.009 ++ export SPDK_TEST_UNITTEST 00:03:54.009 ++ : 00:03:54.009 ++ export SPDK_TEST_AUTOBUILD 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_RELEASE_BUILD 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_ISAL 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_ISCSI 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_ISCSI_INITIATOR 00:03:54.009 ++ : 1 00:03:54.009 ++ export SPDK_TEST_NVME 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_NVME_PMR 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_NVME_BP 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_NVME_CLI 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_NVME_CUSE 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_NVME_FDP 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_NVMF 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_VFIOUSER 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_VFIOUSER_QEMU 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_FUZZER 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_FUZZER_SHORT 00:03:54.009 ++ : rdma 00:03:54.009 ++ export SPDK_TEST_NVMF_TRANSPORT 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_RBD 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_VHOST 00:03:54.009 ++ : 1 00:03:54.009 ++ export SPDK_TEST_BLOCKDEV 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_IOAT 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_BLOBFS 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_VHOST_INIT 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_LVOL 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_VBDEV_COMPRESS 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_RUN_ASAN 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_RUN_UBSAN 00:03:54.009 ++ : 00:03:54.009 ++ export SPDK_RUN_EXTERNAL_DPDK 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_RUN_NON_ROOT 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_CRYPTO 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_FTL 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_OCF 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_VMD 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_OPAL 00:03:54.009 ++ : 00:03:54.009 ++ export SPDK_TEST_NATIVE_DPDK 00:03:54.009 ++ : true 00:03:54.009 ++ export SPDK_AUTOTEST_X 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_RAID5 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_URING 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_USDT 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_USE_IGB_UIO 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_SCHEDULER 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_SCANBUILD 00:03:54.009 ++ : 00:03:54.009 ++ export SPDK_TEST_NVMF_NICS 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_SMA 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_DAOS 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_XNVME 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_ACCEL_DSA 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_ACCEL_IAA 00:03:54.009 ++ : 00:03:54.009 ++ export SPDK_TEST_FUZZER_TARGET 00:03:54.009 ++ : 0 00:03:54.009 ++ export SPDK_TEST_NVMF_MDNS 00:03:54.009 ++ : 0 00:03:54.009 ++ export 
SPDK_JSONRPC_GO_CLIENT 00:03:54.009 ++ export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:54.009 ++ SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:03:54.009 ++ export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:54.009 ++ DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:03:54.009 ++ export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:54.009 ++ VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:54.009 ++ export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:54.009 ++ LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:03:54.009 ++ export PCI_BLOCK_SYNC_ON_RESET=yes 00:03:54.009 ++ PCI_BLOCK_SYNC_ON_RESET=yes 00:03:54.009 ++ export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:54.009 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:03:54.009 ++ export PYTHONDONTWRITEBYTECODE=1 00:03:54.009 ++ PYTHONDONTWRITEBYTECODE=1 00:03:54.009 ++ export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:54.009 ++ ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:03:54.010 ++ export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:54.010 ++ UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:03:54.010 ++ asan_suppression_file=/var/tmp/asan_suppression_file 00:03:54.010 ++ rm -rf /var/tmp/asan_suppression_file 00:03:54.010 ++ cat 00:03:54.010 ++ echo leak:libfuse3.so 00:03:54.010 ++ export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:54.010 ++ LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:03:54.010 ++ export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:54.010 ++ DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:03:54.010 ++ '[' -z /var/spdk/dependencies ']' 00:03:54.010 ++ export DEPENDENCY_DIR 00:03:54.010 ++ export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:54.010 ++ SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:03:54.010 ++ export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:54.010 ++ SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:03:54.010 ++ export QEMU_BIN= 00:03:54.010 ++ QEMU_BIN= 00:03:54.010 ++ export 'VFIO_QEMU_BIN=/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:54.010 ++ VFIO_QEMU_BIN='/usr/local/qemu/vfio-user*/bin/qemu-system-x86_64' 00:03:54.010 ++ export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:54.010 ++ AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:03:54.010 ++ export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:54.010 ++ UNBIND_ENTIRE_IOMMU_GROUP=yes 00:03:54.010 ++ '[' 0 -eq 0 ']' 00:03:54.010 ++ export valgrind= 00:03:54.010 ++ valgrind= 00:03:54.010 +++ uname -s 
00:03:54.010 ++ '[' FreeBSD = Linux ']' 00:03:54.010 +++ uname -s 00:03:54.010 ++ '[' FreeBSD = FreeBSD ']' 00:03:54.010 ++ MAKE=gmake 00:03:54.010 +++ sysctl -a 00:03:54.010 +++ grep -E -i hw.ncpu 00:03:54.010 +++ awk '{print $2}' 00:03:54.010 ++ MAKEFLAGS=-j10 00:03:54.010 ++ HUGEMEM=2048 00:03:54.010 ++ export HUGEMEM=2048 00:03:54.010 ++ HUGEMEM=2048 00:03:54.010 ++ NO_HUGE=() 00:03:54.010 ++ TEST_MODE= 00:03:54.010 ++ [[ -z '' ]] 00:03:54.010 ++ PYTHONPATH+=:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:54.010 ++ exec 00:03:54.010 ++ PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins 00:03:54.010 ++ /home/vagrant/spdk_repo/spdk/scripts/rpc.py --server 00:03:54.010 ++ set_test_storage 2147483648 00:03:54.010 ++ [[ -v testdir ]] 00:03:54.010 ++ local requested_size=2147483648 00:03:54.010 ++ local mount target_dir 00:03:54.010 ++ local -A mounts fss sizes avails uses 00:03:54.010 ++ local source fs size avail mount use 00:03:54.010 ++ local storage_fallback storage_candidates 00:03:54.010 +++ mktemp -udt spdk.XXXXXX 00:03:54.010 ++ storage_fallback=/tmp/spdk.XXXXXX.Sv3idZWgNA 00:03:54.010 ++ storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:03:54.010 ++ [[ -n '' ]] 00:03:54.010 ++ [[ -n '' ]] 00:03:54.010 ++ mkdir -p /home/vagrant/spdk_repo/spdk/test/unit /tmp/spdk.XXXXXX.Sv3idZWgNA/tests/unit /tmp/spdk.XXXXXX.Sv3idZWgNA 00:03:54.010 ++ requested_size=2214592512 00:03:54.010 ++ read -r source fs size use avail _ mount 00:03:54.010 +++ df -T 00:03:54.010 +++ grep -v Filesystem 00:03:54.010 ++ mounts["$mount"]=/dev/gptid/043e6f36-2a13-11ef-a525-001e676338ce 00:03:54.010 ++ fss["$mount"]=ufs 00:03:54.010 ++ avails["$mount"]=17235210240 00:03:54.010 ++ sizes["$mount"]=31182712832 00:03:54.010 ++ uses["$mount"]=11452887040 00:03:54.010 ++ read -r source fs size use avail _ mount 00:03:54.010 ++ mounts["$mount"]=devfs 00:03:54.010 ++ fss["$mount"]=devfs 00:03:54.010 ++ avails["$mount"]=1024 00:03:54.010 ++ sizes["$mount"]=1024 00:03:54.010 ++ uses["$mount"]=0 00:03:54.010 ++ read -r source fs size use avail _ mount 00:03:54.010 ++ mounts["$mount"]=tmpfs 00:03:54.010 ++ fss["$mount"]=tmpfs 00:03:54.010 ++ avails["$mount"]=2147442688 00:03:54.010 ++ sizes["$mount"]=2147483648 00:03:54.010 ++ uses["$mount"]=40960 00:03:54.010 ++ read -r source fs size use avail _ mount 00:03:54.010 ++ mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/freebsd-vg-autotest/freebsd14-libvirt/output 00:03:54.010 ++ fss["$mount"]=fusefs.sshfs 00:03:54.010 ++ avails["$mount"]=96642408448 00:03:54.010 ++ sizes["$mount"]=105088212992 00:03:54.010 ++ uses["$mount"]=3060371456 00:03:54.010 ++ read -r source fs size use avail _ mount 00:03:54.010 ++ printf '* Looking for test storage...\n' 00:03:54.010 * Looking for test storage... 
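The candidate walk that follows in the trace boils down to: take the df snapshot parsed into the mounts/fss/avails/sizes arrays above, then keep the first candidate directory whose backing mount still has the requested 2 GiB free. A condensed, self-contained sketch of that selection (illustrative only — the real set_test_storage in autotest_common.sh also handles tmpfs/ramfs growth and exports SPDK_TEST_STORAGE, and the candidate list here is hypothetical):

    #!/usr/bin/env bash
    # Pick the first candidate directory with enough free space, mirroring
    # the "* Looking for test storage..." walk in the trace below.
    requested_size=2147483648                 # 2 GiB, as requested in the trace
    storage_candidates=("$PWD" /tmp)          # hypothetical list for illustration
    for target_dir in "${storage_candidates[@]}"; do
        # df -k column 4 is available space in 1K blocks on both FreeBSD and Linux.
        avail=$(( $(df -k "$target_dir" | awk 'NR==2 {print $4}') * 1024 ))
        if (( avail >= requested_size )); then
            printf '* Found test storage at %s\n' "$target_dir"
            break
        fi
    done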
00:03:54.010 ++ local target_space new_size 00:03:54.010 ++ for target_dir in "${storage_candidates[@]}" 00:03:54.010 +++ df /home/vagrant/spdk_repo/spdk/test/unit 00:03:54.010 +++ awk '$1 !~ /Filesystem/{print $6}' 00:03:54.010 ++ mount=/ 00:03:54.010 ++ target_space=17235210240 00:03:54.010 ++ (( target_space == 0 || target_space < requested_size )) 00:03:54.010 ++ (( target_space >= requested_size )) 00:03:54.010 ++ [[ ufs == tmpfs ]] 00:03:54.010 ++ [[ ufs == ramfs ]] 00:03:54.010 ++ [[ / == / ]] 00:03:54.010 ++ new_size=13667479552 00:03:54.010 ++ (( new_size * 100 / sizes[/] > 95 )) 00:03:54.010 ++ export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:54.010 ++ SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/unit 00:03:54.010 ++ printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/unit 00:03:54.010 * Found test storage at /home/vagrant/spdk_repo/spdk/test/unit 00:03:54.010 ++ return 0 00:03:54.010 ++ set -o errtrace 00:03:54.010 ++ shopt -s extdebug 00:03:54.010 ++ trap 'trap - ERR; print_backtrace >&2' ERR 00:03:54.010 ++ PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:03:54.010 21:03:05 unittest -- common/autotest_common.sh@1687 -- # true 00:03:54.010 21:03:05 unittest -- common/autotest_common.sh@1689 -- # xtrace_fd 00:03:54.010 21:03:05 unittest -- common/autotest_common.sh@25 -- # [[ -n '' ]] 00:03:54.010 21:03:05 unittest -- common/autotest_common.sh@29 -- # exec 00:03:54.010 21:03:05 unittest -- common/autotest_common.sh@31 -- # xtrace_restore 00:03:54.010 21:03:05 unittest -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:03:54.010 21:03:05 unittest -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:03:54.010 21:03:05 unittest -- common/autotest_common.sh@18 -- # set -x 00:03:54.010 21:03:05 unittest -- unit/unittest.sh@17 -- # cd /home/vagrant/spdk_repo/spdk 00:03:54.010 21:03:05 unittest -- unit/unittest.sh@153 -- # '[' 0 -eq 1 ']' 00:03:54.010 21:03:05 unittest -- unit/unittest.sh@160 -- # '[' -z x ']' 00:03:54.010 21:03:05 unittest -- unit/unittest.sh@167 -- # '[' 0 -eq 1 ']' 00:03:54.010 21:03:05 unittest -- unit/unittest.sh@180 -- # grep CC_TYPE /home/vagrant/spdk_repo/spdk/mk/cc.mk 00:03:54.010 21:03:05 unittest -- unit/unittest.sh@180 -- # CC_TYPE=CC_TYPE=clang 00:03:54.010 21:03:05 unittest -- unit/unittest.sh@181 -- # hash lcov 00:03:54.010 /home/vagrant/spdk_repo/spdk/test/unit/unittest.sh: line 181: hash: lcov: not found 00:03:54.010 21:03:05 unittest -- unit/unittest.sh@184 -- # cov_avail=no 00:03:54.010 21:03:05 unittest -- unit/unittest.sh@186 -- # '[' no = yes ']' 00:03:54.010 21:03:05 unittest -- unit/unittest.sh@208 -- # uname -m 00:03:54.010 21:03:05 unittest -- unit/unittest.sh@208 -- # '[' amd64 = aarch64 ']' 00:03:54.010 21:03:05 unittest -- unit/unittest.sh@212 -- # run_test unittest_pci_event /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:54.010 21:03:05 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.010 21:03:05 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.010 21:03:05 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:54.010 ************************************ 00:03:54.010 START TEST unittest_pci_event 00:03:54.010 ************************************ 00:03:54.010 21:03:05 unittest.unittest_pci_event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/env_dpdk/pci_event.c/pci_event_ut 00:03:54.270 00:03:54.270 
00:03:54.270 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.270 http://cunit.sourceforge.net/ 00:03:54.270 00:03:54.270 00:03:54.270 Suite: pci_event 00:03:54.270 Test: test_pci_parse_event ...passed 00:03:54.270 00:03:54.270 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.270 suites 1 1 n/a 0 0 00:03:54.270 tests 1 1 1 0 0 00:03:54.270 asserts 1 1 1 0 n/a 00:03:54.270 00:03:54.270 Elapsed time = 0.000 seconds 00:03:54.270 00:03:54.270 real 0m0.031s 00:03:54.270 user 0m0.000s 00:03:54.270 sys 0m0.012s 00:03:54.270 21:03:05 unittest.unittest_pci_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.270 ************************************ 00:03:54.270 END TEST unittest_pci_event 00:03:54.270 ************************************ 00:03:54.270 21:03:05 unittest.unittest_pci_event -- common/autotest_common.sh@10 -- # set +x 00:03:54.270 21:03:05 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:54.270 21:03:05 unittest -- unit/unittest.sh@213 -- # run_test unittest_include /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:54.270 21:03:05 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.270 21:03:05 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.270 21:03:05 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:54.270 ************************************ 00:03:54.270 START TEST unittest_include 00:03:54.270 ************************************ 00:03:54.270 21:03:05 unittest.unittest_include -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/include/spdk/histogram_data.h/histogram_ut 00:03:54.270 00:03:54.270 00:03:54.270 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.270 http://cunit.sourceforge.net/ 00:03:54.270 00:03:54.270 00:03:54.270 Suite: histogram 00:03:54.270 Test: histogram_test ...passed 00:03:54.270 Test: histogram_merge ...passed 00:03:54.270 00:03:54.270 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.270 suites 1 1 n/a 0 0 00:03:54.270 tests 2 2 2 0 0 00:03:54.270 asserts 50 50 50 0 n/a 00:03:54.270 00:03:54.270 Elapsed time = 0.000 seconds 00:03:54.270 00:03:54.270 real 0m0.008s 00:03:54.270 user 0m0.000s 00:03:54.270 sys 0m0.008s 00:03:54.270 21:03:05 unittest.unittest_include -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.270 ************************************ 00:03:54.270 END TEST unittest_include 00:03:54.270 ************************************ 00:03:54.270 21:03:05 unittest.unittest_include -- common/autotest_common.sh@10 -- # set +x 00:03:54.270 21:03:05 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:54.270 21:03:05 unittest -- unit/unittest.sh@214 -- # run_test unittest_bdev unittest_bdev 00:03:54.270 21:03:05 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.270 21:03:05 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.270 21:03:05 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:54.270 ************************************ 00:03:54.270 START TEST unittest_bdev 00:03:54.270 ************************************ 00:03:54.270 21:03:05 unittest.unittest_bdev -- common/autotest_common.sh@1123 -- # unittest_bdev 00:03:54.270 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@20 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev.c/bdev_ut 00:03:54.270 00:03:54.270 00:03:54.270 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.270 http://cunit.sourceforge.net/ 
00:03:54.270 00:03:54.270 00:03:54.270 Suite: bdev 00:03:54.270 Test: bytes_to_blocks_test ...passed 00:03:54.270 Test: num_blocks_test ...passed 00:03:54.270 Test: io_valid_test ...passed 00:03:54.270 Test: open_write_test ...[2024-07-14 21:03:05.696661] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev1 already claimed: type exclusive_write by module bdev_ut 00:03:54.270 [2024-07-14 21:03:05.696878] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev4 already claimed: type exclusive_write by module bdev_ut 00:03:54.270 [2024-07-14 21:03:05.696901] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev5 already claimed: type exclusive_write by module bdev_ut 00:03:54.270 passed 00:03:54.270 Test: claim_test ...passed 00:03:54.270 Test: alias_add_del_test ...[2024-07-14 21:03:05.699661] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name bdev0 already exists 00:03:54.270 [2024-07-14 21:03:05.699703] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4643:spdk_bdev_alias_add: *ERROR*: Empty alias passed 00:03:54.270 [2024-07-14 21:03:05.699716] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name proper alias 0 already exists 00:03:54.270 passed 00:03:54.270 Test: get_device_stat_test ...passed 00:03:54.270 Test: bdev_io_types_test ...passed 00:03:54.270 Test: bdev_io_wait_test ...passed 00:03:54.270 Test: bdev_io_spans_split_test ...passed 00:03:54.270 Test: bdev_io_boundary_split_test ...passed 00:03:54.270 Test: bdev_io_max_size_and_segment_split_test ...[2024-07-14 21:03:05.706165] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:3208:_bdev_rw_split: *ERROR*: The first child io was less than a block size 00:03:54.270 passed 00:03:54.270 Test: bdev_io_mix_split_test ...passed 00:03:54.270 Test: bdev_io_split_with_io_wait ...passed 00:03:54.270 Test: bdev_io_write_unit_split_test ...[2024-07-14 21:03:05.709877] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:54.270 [2024-07-14 21:03:05.709929] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 31 does not match the write_unit_size 32 00:03:54.270 [2024-07-14 21:03:05.709945] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 1 does not match the write_unit_size 32 00:03:54.270 [2024-07-14 21:03:05.709957] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:2760:bdev_io_do_submit: *ERROR*: IO num_blocks 32 does not match the write_unit_size 64 00:03:54.270 passed 00:03:54.270 Test: bdev_io_alignment_with_boundary ...passed 00:03:54.270 Test: bdev_io_alignment ...passed 00:03:54.270 Test: bdev_histograms ...passed 00:03:54.270 Test: bdev_write_zeroes ...passed 00:03:54.270 Test: bdev_compare_and_write ...passed 00:03:54.270 Test: bdev_compare ...passed 00:03:54.270 Test: bdev_compare_emulated ...passed 00:03:54.270 Test: bdev_zcopy_write ...passed 00:03:54.270 Test: bdev_zcopy_read ...passed 00:03:54.270 Test: bdev_open_while_hotremove ...passed 00:03:54.270 Test: bdev_close_while_hotremove ...passed 00:03:54.270 Test: bdev_open_ext_test ...[2024-07-14 21:03:05.724052] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:54.270 passed 00:03:54.270 Test: bdev_open_ext_unregister ...passed[2024-07-14 21:03:05.724093] 
/home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8184:spdk_bdev_open_ext: *ERROR*: Missing event callback function 00:03:54.270 00:03:54.270 Test: bdev_set_io_timeout ...passed 00:03:54.270 Test: bdev_set_qd_sampling ...passed 00:03:54.270 Test: lba_range_overlap ...passed 00:03:54.270 Test: lock_lba_range_check_ranges ...passed 00:03:54.270 Test: lock_lba_range_with_io_outstanding ...passed 00:03:54.270 Test: lock_lba_range_overlapped ...passed 00:03:54.270 Test: bdev_quiesce ...[2024-07-14 21:03:05.730886] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:10107:_spdk_bdev_quiesce: *ERROR*: The range to unquiesce was not found. 00:03:54.270 passed 00:03:54.270 Test: bdev_io_abort ...passed 00:03:54.270 Test: bdev_unmap ...passed 00:03:54.270 Test: bdev_write_zeroes_split_test ...passed 00:03:54.270 Test: bdev_set_options_test ...passed 00:03:54.270 Test: bdev_get_memory_domains ...passed 00:03:54.270 Test: bdev_io_ext ...[2024-07-14 21:03:05.734813] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 502:spdk_bdev_set_opts: *ERROR*: opts_size inside opts cannot be zero value 00:03:54.270 passed 00:03:54.270 Test: bdev_io_ext_no_opts ...passed 00:03:54.270 Test: bdev_io_ext_invalid_opts ...passed 00:03:54.270 Test: bdev_io_ext_split ...passed 00:03:54.270 Test: bdev_io_ext_bounce_buffer ...passed 00:03:54.270 Test: bdev_register_uuid_alias ...[2024-07-14 21:03:05.741582] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name 772c3242-4224-11ef-aa83-81fbc7dfef58 already exists 00:03:54.270 [2024-07-14 21:03:05.741607] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:772c3242-4224-11ef-aa83-81fbc7dfef58 alias for bdev bdev0 00:03:54.270 passed 00:03:54.270 Test: bdev_unregister_by_name ...[2024-07-14 21:03:05.741995] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7974:spdk_bdev_unregister_by_name: *ERROR*: Failed to open bdev with name: bdev1 00:03:54.270 [2024-07-14 21:03:05.742008] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7983:spdk_bdev_unregister_by_name: *ERROR*: Bdev bdev was not registered by the specified module. 
00:03:54.270 passed 00:03:54.270 Test: for_each_bdev_test ...passed 00:03:54.270 Test: bdev_seek_test ...passed 00:03:54.270 Test: bdev_copy ...passed 00:03:54.270 Test: bdev_copy_split_test ...passed 00:03:54.270 Test: examine_locks ...passed 00:03:54.270 Test: claim_v2_rwo ...[2024-07-14 21:03:05.745729] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:54.270 [2024-07-14 21:03:05.745748] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8708:claim_verify_rwo: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:54.270 passed 00:03:54.270 Test: claim_v2_rom ...passed 00:03:54.270 Test: claim_v2_rwm ...[2024-07-14 21:03:05.745758] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:54.270 [2024-07-14 21:03:05.745768] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:54.270 [2024-07-14 21:03:05.745776] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:54.270 [2024-07-14 21:03:05.745787] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8704:claim_verify_rwo: *ERROR*: bdev0: key option not supported with read-write-once claims 00:03:54.270 [2024-07-14 21:03:05.745813] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:54.270 [2024-07-14 21:03:05.745823] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:54.270 [2024-07-14 21:03:05.745832] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:54.270 [2024-07-14 21:03:05.745841] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:54.270 [2024-07-14 21:03:05.745851] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8746:claim_verify_rom: *ERROR*: bdev0: key option not supported with read-only-may claims 00:03:54.270 [2024-07-14 21:03:05.745861] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8742:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:54.271 [2024-07-14 21:03:05.745888] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8777:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:54.271 passed 00:03:54.271 Test: claim_v2_existing_writer ...passed 00:03:54.271 Test: claim_v2_existing_v1 ...[2024-07-14 21:03:05.745900] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8078:bdev_open: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:54.271 [2024-07-14 21:03:05.745909] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:54.271 [2024-07-14 21:03:05.745918] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by 
module bdev_ut 00:03:54.271 [2024-07-14 21:03:05.745926] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:54.271 [2024-07-14 21:03:05.745935] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8796:claim_verify_rwm: *ERROR*: bdev bdev0 already claimed with another key: type read_many_write_many by module bdev_ut 00:03:54.271 [2024-07-14 21:03:05.745946] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8777:claim_verify_rwm: *ERROR*: bdev0: shared_claim_key option required with read-write-may claims 00:03:54.271 [2024-07-14 21:03:05.745968] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8742:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:54.271 [2024-07-14 21:03:05.745977] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8742:claim_verify_rom: *ERROR*: bdev0: Cannot obtain read-only-many claim with writable descriptor 00:03:54.271 [2024-07-14 21:03:05.745997] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:54.271 [2024-07-14 21:03:05.746006] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:54.271 passed 00:03:54.271 Test: claim_v1_existing_v2 ...passed 00:03:54.271 Test: examine_claimed ...passed 00:03:54.271 00:03:54.271 [2024-07-14 21:03:05.746014] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type exclusive_write by module bdev_ut 00:03:54.271 [2024-07-14 21:03:05.746035] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module bdev_ut 00:03:54.271 [2024-07-14 21:03:05.746045] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_many by module bdev_ut 00:03:54.271 [2024-07-14 21:03:05.746055] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8545:spdk_bdev_module_claim_bdev: *ERROR*: bdev bdev0 already claimed: type read_many_write_none by module bdev_ut 00:03:54.271 [2024-07-14 21:03:05.746093] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8873:spdk_bdev_module_claim_bdev_desc: *ERROR*: bdev bdev0 already claimed: type read_many_write_one by module vbdev_ut_examine1 00:03:54.271 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.271 suites 1 1 n/a 0 0 00:03:54.271 tests 59 59 59 0 0 00:03:54.271 asserts 4599 4599 4599 0 n/a 00:03:54.271 00:03:54.271 Elapsed time = 0.055 seconds 00:03:54.271 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@21 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/nvme/bdev_nvme.c/bdev_nvme_ut 00:03:54.271 00:03:54.271 00:03:54.271 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.271 http://cunit.sourceforge.net/ 00:03:54.271 00:03:54.271 00:03:54.271 Suite: nvme 00:03:54.271 Test: test_create_ctrlr ...passed 00:03:54.271 Test: test_reset_ctrlr ...passed 00:03:54.271 Test: test_race_between_reset_and_destruct_ctrlr ...passed 00:03:54.271 Test: test_failover_ctrlr ...passed 00:03:54.271 Test: test_race_between_failover_and_add_secondary_trid ...[2024-07-14 21:03:05.753028] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:54.271 [2024-07-14 21:03:05.753347] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.753376] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 passed 00:03:54.271 Test: test_pending_reset ...[2024-07-14 21:03:05.753393] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 passed 00:03:54.271 Test: test_attach_ctrlr ...passed 00:03:54.271 Test: test_aer_cb ...passed 00:03:54.271 Test: test_submit_nvme_cmd ...passed 00:03:54.271 Test: test_add_remove_trid ...[2024-07-14 21:03:05.753541] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.753566] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.753627] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4317:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:03:54.271 passed 00:03:54.271 Test: test_abort ...passed 00:03:54.271 Test: test_get_io_qpair ...passed 00:03:54.271 Test: test_bdev_unregister ...passed 00:03:54.271 Test: test_compare_ns ...passed 00:03:54.271 Test: test_init_ana_log_page ...passed 00:03:54.271 Test: test_get_memory_domains ...passed 00:03:54.271 Test: test_reconnect_qpair ...passed 00:03:54.271 Test: test_create_bdev_ctrlr ...passed 00:03:54.271 Test: test_add_multi_ns_to_bdev ...[2024-07-14 21:03:05.753884] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:7452:bdev_nvme_comparev_and_writev_done: *ERROR*: Unexpected write success after compare failure. 00:03:54.271 [2024-07-14 21:03:05.754094] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.754139] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:5382:bdev_nvme_check_multipath: *ERROR*: cntlid 18 are duplicated. 00:03:54.271 [2024-07-14 21:03:05.754248] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:4573:nvme_bdev_add_ns: *ERROR*: Namespaces are not identical. 00:03:54.271 passed 00:03:54.271 Test: test_add_multi_io_paths_to_nbdev_ch ...passed 00:03:54.271 Test: test_admin_path ...passed 00:03:54.271 Test: test_reset_bdev_ctrlr ...passed 00:03:54.271 Test: test_find_io_path ...passed 00:03:54.271 Test: test_retry_io_if_ana_state_is_updating ...passed 00:03:54.271 Test: test_retry_io_for_io_path_error ...passed 00:03:54.271 Test: test_retry_io_count ...passed 00:03:54.271 Test: test_concurrent_read_ana_log_page ...passed 00:03:54.271 Test: test_retry_io_for_ana_error ...passed 00:03:54.271 Test: test_check_io_error_resiliency_params ...[2024-07-14 21:03:05.754738] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6076:bdev_nvme_check_io_error_resiliency_params: *ERROR*: ctrlr_loss_timeout_sec can't be less than -1. 00:03:54.271 [2024-07-14 21:03:05.754755] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6080:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 
00:03:54.271 [2024-07-14 21:03:05.754764] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6089:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be 0 if ctrlr_loss_timeout_sec is not 0. 00:03:54.271 [2024-07-14 21:03:05.754773] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6092:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than ctrlr_loss_timeout_sec. 00:03:54.271 [2024-07-14 21:03:05.754782] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:54.271 [2024-07-14 21:03:05.754791] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6104:bdev_nvme_check_io_error_resiliency_params: *ERROR*: Both reconnect_delay_sec and fast_io_fail_timeout_sec must be 0 if ctrlr_loss_timeout_sec is 0. 00:03:54.271 [2024-07-14 21:03:05.754799] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6084:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io-fail_timeout_sec. 00:03:54.271 passed 00:03:54.271 Test: test_retry_io_if_ctrlr_is_resetting ...passed 00:03:54.271 Test: test_reconnect_ctrlr ...passed 00:03:54.271 Test: test_retry_failover_ctrlr ...passed 00:03:54.271 Test: test_fail_path ...[2024-07-14 21:03:05.754808] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6099:bdev_nvme_check_io_error_resiliency_params: *ERROR*: fast_io_fail_timeout_sec can't be more than ctrlr_loss_timeout_sec. 00:03:54.271 [2024-07-14 21:03:05.754817] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:6096:bdev_nvme_check_io_error_resiliency_params: *ERROR*: reconnect_delay_sec can't be more than fast_io_fail_timeout_sec. 00:03:54.271 [2024-07-14 21:03:05.754900] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.754918] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.754949] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.754963] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.754977] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.755017] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 passed 00:03:54.271 Test: test_nvme_ns_cmp ...passed 00:03:54.271 Test: test_ana_transition ...passed 00:03:54.271 Test: test_set_preferred_path ...passed 00:03:54.271 Test: test_find_next_io_path ...passed 00:03:54.271 Test: test_find_io_path_min_qd ...passed 00:03:54.271 Test: test_disable_auto_failback ...passed 00:03:54.271 Test: test_set_multipath_policy ...passed 00:03:54.271 Test: test_uuid_generation ...[2024-07-14 21:03:05.755065] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:03:54.271 [2024-07-14 21:03:05.755082] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.755096] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.755109] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.755122] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.755253] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 passed 00:03:54.271 Test: test_retry_io_to_same_path ...passed 00:03:54.271 Test: test_race_between_reset_and_disconnected ...passed 00:03:54.271 Test: test_ctrlr_op_rpc ...passed 00:03:54.271 Test: test_bdev_ctrlr_op_rpc ...passed 00:03:54.271 Test: test_disable_enable_ctrlr ...passed 00:03:54.271 Test: test_delete_ctrlr_done ...passed 00:03:54.271 Test: test_ns_remove_during_reset ...passed 00:03:54.271 Test: test_io_path_is_current ...passed 00:03:54.271 00:03:54.271 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.271 suites 1 1 n/a 0 0 00:03:54.271 tests 49 49 49 0 0 00:03:54.271 asserts 3577 3577 3577 0 n/a 00:03:54.271 00:03:54.271 Elapsed time = 0.016 seconds 00:03:54.271 [2024-07-14 21:03:05.806446] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:03:54.271 [2024-07-14 21:03:05.806497] /home/vagrant/spdk_repo/spdk/module/bdev/nvme/bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
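The test_check_io_error_resiliency_params failures in the bdev_nvme run above spell out, one per *ERROR* line, the invariants bdev_nvme enforces between its reconnect knobs. Restating them as a standalone check makes the rules easier to read; this is a reconstruction from those error strings in shell, not the SPDK C source (-1 for ctrlr_loss_timeout_sec means "retry forever"):

    #!/usr/bin/env bash
    # Invariants taken from the *ERROR* lines above, in the same order where possible.
    check_resiliency_params() {
        local loss=$1 reconnect=$2 fastfail=$3
        (( loss >= -1 )) || { echo 'ctrlr_loss_timeout_sec < -1'; return 1; }
        if (( loss == 0 )); then
            # Both knobs must be 0 when ctrlr_loss_timeout_sec is 0.
            (( reconnect == 0 && fastfail == 0 )) \
                || { echo 'reconnect/fast_io_fail must be 0 when loss is 0'; return 1; }
        else
            (( reconnect != 0 )) || { echo 'reconnect_delay_sec cannot be 0'; return 1; }
            (( loss == -1 || reconnect <= loss )) \
                || { echo 'reconnect_delay_sec > ctrlr_loss_timeout_sec'; return 1; }
            (( loss == -1 || fastfail <= loss )) \
                || { echo 'fast_io_fail_timeout_sec > ctrlr_loss_timeout_sec'; return 1; }
            (( fastfail == 0 || reconnect <= fastfail )) \
                || { echo 'reconnect_delay_sec > fast_io_fail_timeout_sec'; return 1; }
        fi
    }
    check_resiliency_params -1 10 0 && echo ok   # valid: retry forever, 10s between attempts
    check_resiliency_params 0 5 0                # rejected, like the cases the unit test probes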
00:03:54.271 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@22 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid.c/bdev_raid_ut 00:03:54.271 00:03:54.271 00:03:54.271 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.272 http://cunit.sourceforge.net/ 00:03:54.272 00:03:54.272 Test Options 00:03:54.272 blocklen = 4096, strip_size = 64, max_io_size = 1024, g_max_base_drives = 32, g_max_raids = 2 00:03:54.272 00:03:54.272 Suite: raid 00:03:54.272 Test: test_create_raid ...passed 00:03:54.272 Test: test_create_raid_superblock ...passed 00:03:54.272 Test: test_delete_raid ...passed 00:03:54.272 Test: test_create_raid_invalid_args ...[2024-07-14 21:03:05.813699] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1481:_raid_bdev_create: *ERROR*: Unsupported raid level '-1' 00:03:54.272 [2024-07-14 21:03:05.813867] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1475:_raid_bdev_create: *ERROR*: Invalid strip size 1231 00:03:54.272 passed 00:03:54.272 Test: test_delete_raid_invalid_args ...passed 00:03:54.272 Test: test_io_channel ...[2024-07-14 21:03:05.813929] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:1465:_raid_bdev_create: *ERROR*: Duplicate raid bdev name found: raid1 00:03:54.272 [2024-07-14 21:03:05.813981] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:54.272 [2024-07-14 21:03:05.813989] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:54.272 [2024-07-14 21:03:05.814132] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3193:raid_bdev_configure_base_bdev: *ERROR*: Unable to claim this bdev as it is already claimed 00:03:54.272 [2024-07-14 21:03:05.814143] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid.c:3369:raid_bdev_add_base_bdev: *ERROR*: base bdev 'Nvme0n1' configure failed: (null) 00:03:54.272 passed 00:03:54.272 Test: test_reset_io ...passed 00:03:54.272 Test: test_multi_raid ...passed 00:03:54.272 Test: test_io_type_supported ...passed 00:03:54.272 Test: test_raid_json_dump_info ...passed 00:03:54.272 Test: test_context_size ...passed 00:03:54.272 Test: test_raid_level_conversions ...passed 00:03:54.272 Test: test_raid_io_split ...passed 00:03:54.272 Test: test_raid_process ...passed 00:03:54.272 00:03:54.272 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.272 suites 1 1 n/a 0 0 00:03:54.272 tests 14 14 14 0 0 00:03:54.272 asserts 6183 6183 6183 0 n/a 00:03:54.272 00:03:54.272 Elapsed time = 0.000 seconds 00:03:54.532 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@23 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/bdev_raid_sb.c/bdev_raid_sb_ut 00:03:54.532 00:03:54.532 00:03:54.532 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.532 http://cunit.sourceforge.net/ 00:03:54.532 00:03:54.532 00:03:54.532 Suite: raid_sb 00:03:54.532 Test: test_raid_bdev_write_superblock ...passed 00:03:54.532 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:54.532 Test: test_raid_bdev_parse_superblock ...[2024-07-14 21:03:05.819413] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:54.532 passed 00:03:54.532 Suite: raid_sb_md 00:03:54.532 Test: test_raid_bdev_write_superblock ...passed 00:03:54.532 Test: 
test_raid_bdev_load_base_bdev_superblock ...passed 00:03:54.532 Test: test_raid_bdev_parse_superblock ...passed[2024-07-14 21:03:05.819557] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:54.532 00:03:54.532 Suite: raid_sb_md_interleaved 00:03:54.532 Test: test_raid_bdev_write_superblock ...passed 00:03:54.532 Test: test_raid_bdev_load_base_bdev_superblock ...passed 00:03:54.532 Test: test_raid_bdev_parse_superblock ...passed 00:03:54.532 00:03:54.532 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.532 suites 3 3 n/a 0 0 00:03:54.532 tests 9 9 9 0 0 00:03:54.532 asserts 139 139 139 0 n/a 00:03:54.532 00:03:54.532 Elapsed time = 0.000 seconds 00:03:54.532 [2024-07-14 21:03:05.819622] /home/vagrant/spdk_repo/spdk/module/bdev/raid/bdev_raid_sb.c: 166:raid_bdev_parse_superblock: *ERROR*: Not supported superblock major version 9999 on bdev test_bdev 00:03:54.532 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@24 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/concat.c/concat_ut 00:03:54.532 00:03:54.532 00:03:54.532 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.532 http://cunit.sourceforge.net/ 00:03:54.532 00:03:54.532 00:03:54.532 Suite: concat 00:03:54.532 Test: test_concat_start ...passed 00:03:54.532 Test: test_concat_rw ...passed 00:03:54.532 Test: test_concat_null_payload ...passed 00:03:54.532 00:03:54.532 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.532 suites 1 1 n/a 0 0 00:03:54.532 tests 3 3 3 0 0 00:03:54.532 asserts 8460 8460 8460 0 n/a 00:03:54.532 00:03:54.532 Elapsed time = 0.000 seconds 00:03:54.532 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@25 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid0.c/raid0_ut 00:03:54.532 00:03:54.532 00:03:54.532 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.532 http://cunit.sourceforge.net/ 00:03:54.532 00:03:54.532 00:03:54.532 Suite: raid0 00:03:54.532 Test: test_write_io ...passed 00:03:54.532 Test: test_read_io ...passed 00:03:54.532 Test: test_unmap_io ...passed 00:03:54.532 Test: test_io_failure ...passed 00:03:54.532 Suite: raid0_dif 00:03:54.532 Test: test_write_io ...passed 00:03:54.532 Test: test_read_io ...passed 00:03:54.532 Test: test_unmap_io ...passed 00:03:54.532 Test: test_io_failure ...passed 00:03:54.532 00:03:54.532 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.532 suites 2 2 n/a 0 0 00:03:54.532 tests 8 8 8 0 0 00:03:54.532 asserts 368291 368291 368291 0 n/a 00:03:54.532 00:03:54.532 Elapsed time = 0.016 seconds 00:03:54.532 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@26 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/raid/raid1.c/raid1_ut 00:03:54.532 00:03:54.532 00:03:54.532 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.532 http://cunit.sourceforge.net/ 00:03:54.532 00:03:54.532 00:03:54.532 Suite: raid1 00:03:54.532 Test: test_raid1_start ...passed 00:03:54.532 Test: test_raid1_read_balancing ...passed 00:03:54.532 Test: test_raid1_write_error ...passed 00:03:54.532 Test: test_raid1_read_error ...passed 00:03:54.532 00:03:54.532 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.532 suites 1 1 n/a 0 0 00:03:54.532 tests 4 4 4 0 0 00:03:54.532 asserts 4374 4374 4374 0 n/a 00:03:54.532 00:03:54.532 Elapsed time = 0.000 seconds 00:03:54.532 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@27 -- # 
/home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/bdev_zone.c/bdev_zone_ut 00:03:54.532 00:03:54.532 00:03:54.532 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.532 http://cunit.sourceforge.net/ 00:03:54.532 00:03:54.532 00:03:54.532 Suite: zone 00:03:54.532 Test: test_zone_get_operation ...passed 00:03:54.532 Test: test_bdev_zone_get_info ...passed 00:03:54.532 Test: test_bdev_zone_management ...passed 00:03:54.532 Test: test_bdev_zone_append ...passed 00:03:54.532 Test: test_bdev_zone_append_with_md ...passed 00:03:54.532 Test: test_bdev_zone_appendv ...passed 00:03:54.532 Test: test_bdev_zone_appendv_with_md ...passed 00:03:54.532 Test: test_bdev_io_get_append_location ...passed 00:03:54.532 00:03:54.532 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.532 suites 1 1 n/a 0 0 00:03:54.532 tests 8 8 8 0 0 00:03:54.532 asserts 94 94 94 0 n/a 00:03:54.532 00:03:54.533 Elapsed time = 0.000 seconds 00:03:54.533 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@28 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/gpt/gpt.c/gpt_ut 00:03:54.533 00:03:54.533 00:03:54.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.533 http://cunit.sourceforge.net/ 00:03:54.533 00:03:54.533 00:03:54.533 Suite: gpt_parse 00:03:54.533 Test: test_parse_mbr_and_primary ...[2024-07-14 21:03:05.860475] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:54.533 [2024-07-14 21:03:05.860757] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:54.533 [2024-07-14 21:03:05.860821] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:54.533 [2024-07-14 21:03:05.860839] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:54.533 [2024-07-14 21:03:05.860858] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:54.533 [2024-07-14 21:03:05.860873] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:54.533 passed 00:03:54.533 Test: test_parse_secondary ...[2024-07-14 21:03:05.861106] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=1633771873 00:03:54.533 [2024-07-14 21:03:05.861122] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 279:gpt_parse_partition_table: *ERROR*: Failed to read gpt header 00:03:54.533 [2024-07-14 21:03:05.861139] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=1633771873 which exceeds max=128 00:03:54.533 [2024-07-14 21:03:05.861153] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 285:gpt_parse_partition_table: *ERROR*: Failed to read gpt partitions 00:03:54.533 passed 00:03:54.533 Test: test_check_mbr ...[2024-07-14 21:03:05.861385] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:54.533 passed 00:03:54.533 Test: test_read_header ...[2024-07-14 21:03:05.861401] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 259:gpt_parse_mbr: *ERROR*: Gpt and the related buffer should not be NULL 00:03:54.533 passed 00:03:54.533 Test: test_read_partitions ...[2024-07-14 21:03:05.861425] 
/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 165:gpt_read_header: *ERROR*: head_size=600 00:03:54.533 [2024-07-14 21:03:05.861442] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 178:gpt_read_header: *ERROR*: head crc32 does not match, provided=584158336, calculated=3316781438 00:03:54.533 [2024-07-14 21:03:05.861462] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 184:gpt_read_header: *ERROR*: signature did not match 00:03:54.533 [2024-07-14 21:03:05.861478] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 192:gpt_read_header: *ERROR*: head my_lba(7016996765293437281) != expected(1) 00:03:54.533 [2024-07-14 21:03:05.861495] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 136:gpt_lba_range_check: *ERROR*: Head's usable_lba_end(7016996765293437281) > lba_end(0) 00:03:54.533 [2024-07-14 21:03:05.861513] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 197:gpt_read_header: *ERROR*: lba range check error 00:03:54.533 [2024-07-14 21:03:05.861536] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 89:gpt_read_partitions: *ERROR*: Num_partition_entries=256 which exceeds max=128 00:03:54.533 [2024-07-14 21:03:05.861551] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 96:gpt_read_partitions: *ERROR*: Partition_entry_size(0) != expected(80) 00:03:54.533 [2024-07-14 21:03:05.861566] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 59:gpt_get_partitions_buf: *ERROR*: Buffer size is not enough 00:03:54.533 [2024-07-14 21:03:05.861580] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 105:gpt_read_partitions: *ERROR*: Failed to get gpt partitions buf 00:03:54.533 passed 00:03:54.533 00:03:54.533 [2024-07-14 21:03:05.861697] /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.c: 113:gpt_read_partitions: *ERROR*: GPT partition entry array crc32 did not match 00:03:54.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.533 suites 1 1 n/a 0 0 00:03:54.533 tests 5 5 5 0 0 00:03:54.533 asserts 33 33 33 0 n/a 00:03:54.533 00:03:54.533 Elapsed time = 0.000 seconds 00:03:54.533 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@29 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/part.c/part_ut 00:03:54.533 00:03:54.533 00:03:54.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.533 http://cunit.sourceforge.net/ 00:03:54.533 00:03:54.533 00:03:54.533 Suite: bdev_part 00:03:54.533 Test: part_test ...[2024-07-14 21:03:05.871414] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:4613:bdev_name_add: *ERROR*: Bdev name d015021c-0e0a-495d-b6d4-169a1f011283 already exists 00:03:54.533 [2024-07-14 21:03:05.871672] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:7722:bdev_register: *ERROR*: Unable to add uuid:d015021c-0e0a-495d-b6d4-169a1f011283 alias for bdev test1 00:03:54.533 passed 00:03:54.533 Test: part_free_test ...passed 00:03:54.533 Test: part_get_io_channel_test ...passed 00:03:54.533 Test: part_construct_ext ...passed 00:03:54.533 00:03:54.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.533 suites 1 1 n/a 0 0 00:03:54.533 tests 4 4 4 0 0 00:03:54.533 asserts 48 48 48 0 n/a 00:03:54.533 00:03:54.533 Elapsed time = 0.000 seconds 00:03:54.533 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@30 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/scsi_nvme.c/scsi_nvme_ut 00:03:54.533 00:03:54.533 00:03:54.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.533 http://cunit.sourceforge.net/ 00:03:54.533 00:03:54.533 00:03:54.533 Suite: scsi_nvme_suite 00:03:54.533 Test: scsi_nvme_translate_test ...passed 00:03:54.533 
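The gpt_read_header failures above exercise the validation chain in order: header size, header crc32, signature, my_lba, and the usable-lba range. A standalone toy model of a subset of that chain, assuming nothing beyond what the error strings show — struct gpt_header here is illustrative and omits the crc32 step (which the real code computes over the header with its crc field zeroed); the garbage my_lba value is the one printed in the log:

    #include <assert.h>
    #include <errno.h>
    #include <stdint.h>
    #include <string.h>

    /* Illustrative subset of the header fields the checks above reference. */
    struct gpt_header {
        char     signature[8];      /* must be "EFI PART" */
        uint32_t header_size;       /* must fit in one sector */
        uint64_t my_lba;            /* must equal the lba it was read from */
        uint64_t first_usable_lba;
        uint64_t last_usable_lba;   /* must lie inside the device */
    };

    static int gpt_read_header(const struct gpt_header *h,
                               uint64_t read_lba, uint64_t dev_last_lba,
                               uint32_t sector_size)
    {
        if (h->header_size > sector_size)
            return -EINVAL;                   /* "head_size=600" style failure */
        if (memcmp(h->signature, "EFI PART", 8) != 0)
            return -EINVAL;                   /* "signature did not match" */
        if (h->my_lba != read_lba)
            return -EINVAL;                   /* "head my_lba(...) != expected(1)" */
        if (h->last_usable_lba > dev_last_lba)
            return -EINVAL;                   /* "lba range check error" */
        return 0;
    }

    int main(void)
    {
        struct gpt_header h = { "EFI PART", 92, 1, 34, 2014 };
        assert(gpt_read_header(&h, 1, 2047, 512) == 0);
        h.my_lba = 7016996765293437281ULL;    /* 8 ascii 'a' bytes, as in the log */
        assert(gpt_read_header(&h, 1, 2047, 512) == -EINVAL);
        return 0;
    }

The ordering matters: each error line in the log corresponds to the first check that trips, which is why a single corrupt buffer produces a different message depending on which field is damaged.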
00:03:54.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.533 suites 1 1 n/a 0 0 00:03:54.533 tests 1 1 1 0 0 00:03:54.533 asserts 104 104 104 0 n/a 00:03:54.533 00:03:54.533 Elapsed time = 0.000 seconds 00:03:54.533 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@31 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_lvol.c/vbdev_lvol_ut 00:03:54.533 00:03:54.533 00:03:54.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.533 http://cunit.sourceforge.net/ 00:03:54.533 00:03:54.533 00:03:54.533 Suite: lvol 00:03:54.533 Test: ut_lvs_init ...passed 00:03:54.533 Test: ut_lvol_init ...passed 00:03:54.533 Test: ut_lvol_snapshot ...passed 00:03:54.533 Test: ut_lvol_clone ...passed 00:03:54.533 Test: ut_lvs_destroy ...passed 00:03:54.533 Test: ut_lvs_unload ...passed 00:03:54.533 Test: ut_lvol_resize ...[2024-07-14 21:03:05.887323] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 180:_vbdev_lvs_create_cb: *ERROR*: Cannot create lvol store bdev 00:03:54.533 [2024-07-14 21:03:05.887578] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 264:vbdev_lvs_create: *ERROR*: Cannot create blobstore device 00:03:54.533 [2024-07-14 21:03:05.887718] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1394:vbdev_lvol_resize: *ERROR*: lvol does not exist 00:03:54.533 passed 00:03:54.533 Test: ut_lvol_set_read_only ...passed 00:03:54.533 Test: ut_lvol_hotremove ...passed 00:03:54.533 Test: ut_vbdev_lvol_get_io_channel ...passed 00:03:54.533 Test: ut_vbdev_lvol_io_type_supported ...passed 00:03:54.533 Test: ut_lvol_read_write ...passed 00:03:54.533 Test: ut_vbdev_lvol_submit_request ...passed 00:03:54.533 Test: ut_lvol_examine_config ...passed 00:03:54.533 Test: ut_lvol_examine_disk ...passed 00:03:54.533 Test: ut_lvol_rename ...passed 00:03:54.533 Test: ut_bdev_finish ...passed 00:03:54.533 Test: ut_lvs_rename ...passed 00:03:54.533 Test: ut_lvol_seek ...passed 00:03:54.533 Test: ut_esnap_dev_create ...[2024-07-14 21:03:05.887833] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1536:_vbdev_lvs_examine_finish: *ERROR*: Error opening lvol UNIT_TEST_UUID 00:03:54.533 [2024-07-14 21:03:05.887892] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c: 105:_vbdev_lvol_change_bdev_alias: *ERROR*: cannot add alias 'lvs/new_lvol_name' 00:03:54.533 [2024-07-14 21:03:05.887908] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1344:vbdev_lvol_rename: *ERROR*: renaming lvol to 'new_lvol_name' does not succeed 00:03:54.533 [2024-07-14 21:03:05.887964] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1879:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : NULL esnap ID 00:03:54.533 [2024-07-14 21:03:05.887979] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1885:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID length (36) 00:03:54.533 passed 00:03:54.533 Test: ut_lvol_esnap_clone_bad_args ...passed 00:03:54.533 Test: ut_lvol_shallow_copy ...[2024-07-14 21:03:05.887995] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1890:vbdev_lvol_esnap_dev_create: *ERROR*: lvol : Invalid esnap ID: not a UUID 00:03:54.533 [2024-07-14 21:03:05.888037] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1280:vbdev_lvol_create_bdev_clone: *ERROR*: lvol store not specified 00:03:54.533 [2024-07-14 21:03:05.888055] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1287:vbdev_lvol_create_bdev_clone: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:03:54.533 
[2024-07-14 21:03:05.888090] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1977:vbdev_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:03:54.533 [2024-07-14 21:03:05.888104] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:1982:vbdev_lvol_shallow_copy: *ERROR*: lvol lvol_sc, bdev name must not be NULL 00:03:54.533 passed 00:03:54.533 Test: ut_lvol_set_external_parent ...passed 00:03:54.533 00:03:54.533 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.533 suites 1 1 n/a 0 0 00:03:54.533 tests 23 23 23 0 0 00:03:54.533 asserts 770 770 770 0 n/a 00:03:54.533 00:03:54.533 Elapsed time = 0.000 seconds 00:03:54.533 [2024-07-14 21:03:05.888150] /home/vagrant/spdk_repo/spdk/module/bdev/lvol/vbdev_lvol.c:2037:vbdev_lvol_set_external_parent: *ERROR*: bdev '255f4236-9427-42d0-a9f1-aa17f37dd8db' could not be opened: error -19 00:03:54.533 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@32 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/vbdev_zone_block.c/vbdev_zone_block_ut 00:03:54.533 00:03:54.533 00:03:54.533 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.533 http://cunit.sourceforge.net/ 00:03:54.533 00:03:54.533 00:03:54.533 Suite: zone_block 00:03:54.533 Test: test_zone_block_create ...passed 00:03:54.533 Test: test_zone_block_create_invalid ...[2024-07-14 21:03:05.896761] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 624:zone_block_insert_name: *ERROR*: base bdev Nvme0n1 already claimed 00:03:54.533 [2024-07-14 21:03:05.896929] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File existspassed 00:03:54.533 Test: test_get_zone_info ...[2024-07-14 21:03:05.896960] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 721:zone_block_register: *ERROR*: Base bdev zone_dev1 is already a zoned bdev 00:03:54.533 [2024-07-14 21:03:05.896967] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: File exists[2024-07-14 21:03:05.896976] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 860:vbdev_zone_block_create: *ERROR*: Zone capacity can't be 0 00:03:54.533 [2024-07-14 21:03:05.896982] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-14 21:03:05.896989] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 865:vbdev_zone_block_create: *ERROR*: Optimal open zones can't be 0 00:03:54.533 [2024-07-14 21:03:05.896995] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block_rpc.c: 58:rpc_zone_block_create: *ERROR*: Failed to create block zoned vbdev: Invalid argument[2024-07-14 21:03:05.897079] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.897093] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 passed 00:03:54.533 Test: test_supported_io_types ...passed 00:03:54.533 Test: test_reset_zone ...passed 00:03:54.533 Test: test_open_zone ...[2024-07-14 21:03:05.897102] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
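The *ERROR* lines above are expected output: test_zone_block_create_invalid deliberately passes a zero zone capacity and zero optimal-open-zones and asserts that creation fails, so the suite still reports passed. A minimal standalone sketch of that error-path pattern, where create_zone_dev is a hypothetical stand-in and not an SPDK API:

    #include <assert.h>
    #include <errno.h>

    /* Hypothetical stand-in for the vbdev creation call; it enforces the
     * two rules the log shows being violated ("Zone capacity can't be 0",
     * "Optimal open zones can't be 0"). */
    static int create_zone_dev(unsigned long zone_capacity,
                               unsigned long optimal_open_zones)
    {
        if (zone_capacity == 0)
            return -EINVAL;
        if (optimal_open_zones == 0)
            return -EINVAL;
        return 0;
    }

    int main(void)
    {
        /* Invalid configurations must be rejected... */
        assert(create_zone_dev(0, 1) == -EINVAL);
        assert(create_zone_dev(64, 0) == -EINVAL);
        /* ...and a sane one must succeed. */
        assert(create_zone_dev(64, 1) == 0);
        return 0;
    }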
00:03:54.533 [2024-07-14 21:03:05.897159] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.897168] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.897193] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 passed 00:03:54.533 Test: test_zone_write ...[2024-07-14 21:03:05.897435] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.897444] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.897471] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:54.533 [2024-07-14 21:03:05.897478] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.897487] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:54.533 [2024-07-14 21:03:05.897493] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.897941] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x407, wp 0x405) 00:03:54.533 [2024-07-14 21:03:05.897961] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.897970] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 402:zone_block_write: *ERROR*: Trying to write to zone with invalid address (lba 0x400, wp 0x405) 00:03:54.533 [2024-07-14 21:03:05.897977] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 passed 00:03:54.533 Test: test_zone_read ...[2024-07-14 21:03:05.898575] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:54.533 [2024-07-14 21:03:05.898602] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.898628] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x4ff8, len 0x10) 00:03:54.533 [2024-07-14 21:03:05.898635] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
00:03:54.533 passed 00:03:54.533 Test: test_close_zone ...passed 00:03:54.533 Test: test_finish_zone ...[2024-07-14 21:03:05.898644] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 460:zone_block_read: *ERROR*: Trying to read from invalid zone (lba 0x5000) 00:03:54.533 [2024-07-14 21:03:05.898651] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.898689] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 465:zone_block_read: *ERROR*: Read exceeds zone capacity (lba 0x3f8, len 0x10) 00:03:54.533 [2024-07-14 21:03:05.898696] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.898716] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.898727] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.898760] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.898768] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 passed 00:03:54.533 Test: test_append_zone ...[2024-07-14 21:03:05.898814] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.533 [2024-07-14 21:03:05.898823] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.534 [2024-07-14 21:03:05.898845] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 391:zone_block_write: *ERROR*: Trying to write to zone in invalid state 2 00:03:54.534 [2024-07-14 21:03:05.898851] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.534 [2024-07-14 21:03:05.898860] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 378:zone_block_write: *ERROR*: Trying to write to invalid zone (lba 0x5000) 00:03:54.534 [2024-07-14 21:03:05.898866] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 00:03:54.534 passed 00:03:54.534 00:03:54.534 [2024-07-14 21:03:05.899909] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 411:zone_block_write: *ERROR*: Write exceeds zone capacity (lba 0x3f0, len 0x20, wp 0x3f0) 00:03:54.534 [2024-07-14 21:03:05.899922] /home/vagrant/spdk_repo/spdk/module/bdev/zone_block/vbdev_zone_block.c: 510:zone_block_submit_request: *ERROR*: ERROR on bdev_io submission! 
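The invalid-address and exceeds-capacity messages above come from write-pointer bookkeeping: a zone write must land exactly on the write pointer and must not run past the zone's last writable block. A toy model of those checks, reusing lba/wp values from the log; the 0x400-block zone geometry is an assumption for illustration, not taken from the test:

    #include <assert.h>
    #include <errno.h>
    #include <stdint.h>

    struct zone {
        uint64_t slba;      /* first lba of the zone */
        uint64_t capacity;  /* writable blocks (assumed 0x400 here) */
        uint64_t wp;        /* write pointer: next lba that may be written */
    };

    static int zone_check_write(const struct zone *z, uint64_t lba, uint64_t len)
    {
        if (lba < z->slba || lba >= z->slba + z->capacity)
            return -EINVAL;     /* "Trying to write to invalid zone" */
        if (lba != z->wp)
            return -EINVAL;     /* "invalid address (lba ..., wp ...)" */
        if (z->wp + len > z->slba + z->capacity)
            return -EINVAL;     /* "Write exceeds zone capacity" */
        return 0;
    }

    int main(void)
    {
        struct zone z = { .slba = 0x400, .capacity = 0x400, .wp = 0x405 };
        assert(zone_check_write(&z, 0x407, 1) == -EINVAL);  /* ahead of wp */
        assert(zone_check_write(&z, 0x400, 1) == -EINVAL);  /* behind wp */
        assert(zone_check_write(&z, 0x405, 1) == 0);        /* exactly at wp */
        z.wp = 0x7f0;  /* 0x3f0 blocks into the zone, as in the log */
        assert(zone_check_write(&z, 0x7f0, 0x20) == -EINVAL); /* overruns end */
        return 0;
    }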
00:03:54.534 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.534 suites 1 1 n/a 0 0 00:03:54.534 tests 11 11 11 0 0 00:03:54.534 asserts 3437 3437 3437 0 n/a 00:03:54.534 00:03:54.534 Elapsed time = 0.008 seconds 00:03:54.534 21:03:05 unittest.unittest_bdev -- unit/unittest.sh@33 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/bdev/mt/bdev.c/bdev_ut 00:03:54.534 00:03:54.534 00:03:54.534 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.534 http://cunit.sourceforge.net/ 00:03:54.534 00:03:54.534 00:03:54.534 Suite: bdev 00:03:54.534 Test: basic ...[2024-07-14 21:03:05.908514] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b269): Operation not permitted (rc=-1) 00:03:54.534 [2024-07-14 21:03:05.908720] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device 0x3f460f66a480 (0x24b260): Operation not permitted (rc=-1) 00:03:54.534 [2024-07-14 21:03:05.908734] thread.c:2374:spdk_get_io_channel: *ERROR*: could not create io_channel for io_device bdev_ut_bdev (0x24b269): Operation not permitted (rc=-1) 00:03:54.534 passed 00:03:54.534 Test: unregister_and_close ...passed 00:03:54.534 Test: unregister_and_close_different_threads ...passed 00:03:54.534 Test: basic_qos ...passed 00:03:54.534 Test: put_channel_during_reset ...passed 00:03:54.534 Test: aborted_reset ...passed 00:03:54.534 Test: aborted_reset_no_outstanding_io ...passed 00:03:54.534 Test: io_during_reset ...passed 00:03:54.534 Test: reset_completions ...passed 00:03:54.534 Test: io_during_qos_queue ...passed 00:03:54.534 Test: io_during_qos_reset ...passed 00:03:54.534 Test: enomem ...passed 00:03:54.534 Test: enomem_multi_bdev ...passed 00:03:54.534 Test: enomem_multi_bdev_unregister ...passed 00:03:54.534 Test: enomem_multi_io_target ...passed 00:03:54.534 Test: qos_dynamic_enable ...passed 00:03:54.534 Test: bdev_histograms_mt ...passed 00:03:54.534 Test: bdev_set_io_timeout_mt ...[2024-07-14 21:03:05.939962] thread.c: 471:spdk_thread_lib_fini: *ERROR*: io_device 0x3f460f66a600 not unregistered 00:03:54.534 passed 00:03:54.534 Test: lock_lba_range_then_submit_io ...[2024-07-14 21:03:05.940988] thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x24b248 already registered (old:0x3f460f66a600 new:0x3f460f66a780) 00:03:54.534 passed 00:03:54.534 Test: unregister_during_reset ...passed 00:03:54.534 Test: event_notify_and_close ...passed 00:03:54.534 Test: unregister_and_qos_poller ...passed 00:03:54.534 Suite: bdev_wrong_thread 00:03:54.534 Test: spdk_bdev_register_wt ...passed 00:03:54.534 Test: spdk_bdev_examine_wt ...[2024-07-14 21:03:05.946833] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c:8503:spdk_bdev_register: *ERROR*: Cannot register bdev wt_bdev on thread 0x3f460f633380 (0x3f460f633380) 00:03:54.534 [2024-07-14 21:03:05.946883] /home/vagrant/spdk_repo/spdk/lib/bdev/bdev.c: 811:spdk_bdev_examine: *ERROR*: Cannot examine bdev ut_bdev_wt on thread 0x3f460f633380 (0x3f460f633380) 00:03:54.534 passed 00:03:54.534 00:03:54.534 Run Summary: Type Total Ran Passed Failed Inactive 00:03:54.534 suites 2 2 n/a 0 0 00:03:54.534 tests 24 24 24 0 0 00:03:54.534 asserts 621 621 621 0 n/a 00:03:54.534 00:03:54.534 Elapsed time = 0.039 seconds 00:03:54.534 00:03:54.534 real 0m0.261s 00:03:54.534 user 0m0.138s 00:03:54.534 sys 0m0.087s 00:03:54.534 21:03:05 unittest.unittest_bdev -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:54.534 21:03:05 unittest.unittest_bdev -- common/autotest_common.sh@10 -- # set +x 
00:03:54.534 ************************************ 00:03:54.534 END TEST unittest_bdev 00:03:54.534 ************************************ 00:03:54.534 21:03:05 unittest -- common/autotest_common.sh@1142 -- # return 0 00:03:54.534 21:03:05 unittest -- unit/unittest.sh@215 -- # grep -q '#define SPDK_CONFIG_CRYPTO 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:54.534 21:03:05 unittest -- unit/unittest.sh@220 -- # grep -q '#define SPDK_CONFIG_VBDEV_COMPRESS 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:54.534 21:03:05 unittest -- unit/unittest.sh@225 -- # grep -q '#define SPDK_CONFIG_DPDK_COMPRESSDEV 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:54.534 21:03:05 unittest -- unit/unittest.sh@229 -- # grep -q '#define SPDK_CONFIG_RAID5F 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:03:54.534 21:03:05 unittest -- unit/unittest.sh@233 -- # run_test unittest_blob_blobfs unittest_blob 00:03:54.534 21:03:05 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:54.534 21:03:05 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:54.534 21:03:05 unittest -- common/autotest_common.sh@10 -- # set +x 00:03:54.534 ************************************ 00:03:54.534 START TEST unittest_blob_blobfs 00:03:54.534 ************************************ 00:03:54.534 21:03:06 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1123 -- # unittest_blob 00:03:54.534 21:03:06 unittest.unittest_blob_blobfs -- unit/unittest.sh@39 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut ]] 00:03:54.534 21:03:06 unittest.unittest_blob_blobfs -- unit/unittest.sh@40 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob.c/blob_ut 00:03:54.534 00:03:54.534 00:03:54.534 CUnit - A unit testing framework for C - Version 2.1-3 00:03:54.534 http://cunit.sourceforge.net/ 00:03:54.534 00:03:54.534 00:03:54.534 Suite: blob_nocopy_noextent 00:03:54.534 Test: blob_init ...[2024-07-14 21:03:06.008656] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:54.534 passed 00:03:54.534 Test: blob_thin_provision ...passed 00:03:54.534 Test: blob_read_only ...passed 00:03:54.793 Test: bs_load ...[2024-07-14 21:03:06.086404] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:54.793 passed 00:03:54.793 Test: bs_load_custom_cluster_size ...passed 00:03:54.793 Test: bs_load_after_failed_grow ...passed 00:03:54.793 Test: bs_cluster_sz ...[2024-07-14 21:03:06.109668] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:54.793 [2024-07-14 21:03:06.109741] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
00:03:54.793 [2024-07-14 21:03:06.109758] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:54.793 passed 00:03:54.793 Test: bs_resize_md ...passed 00:03:54.793 Test: bs_destroy ...passed 00:03:54.793 Test: bs_type ...passed 00:03:54.793 Test: bs_super_block ...passed 00:03:54.793 Test: bs_test_recover_cluster_count ...passed 00:03:54.793 Test: bs_grow_live ...passed 00:03:54.793 Test: bs_grow_live_no_space ...passed 00:03:54.793 Test: bs_test_grow ...passed 00:03:54.793 Test: blob_serialize_test ...passed 00:03:54.793 Test: super_block_crc ...passed 00:03:54.793 Test: blob_thin_prov_write_count_io ...passed 00:03:54.793 Test: blob_thin_prov_unmap_cluster ...passed 00:03:54.793 Test: bs_load_iter_test ...passed 00:03:54.793 Test: blob_relations ...[2024-07-14 21:03:06.249649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:54.793 [2024-07-14 21:03:06.249698] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:54.793 [2024-07-14 21:03:06.249782] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:54.793 [2024-07-14 21:03:06.249792] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:54.793 passed 00:03:54.793 Test: blob_relations2 ...[2024-07-14 21:03:06.262180] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:54.793 [2024-07-14 21:03:06.262278] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:54.793 [2024-07-14 21:03:06.262304] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:54.793 [2024-07-14 21:03:06.262310] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:54.793 [2024-07-14 21:03:06.262429] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:54.793 [2024-07-14 21:03:06.262439] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:54.793 [2024-07-14 21:03:06.262471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:54.793 [2024-07-14 21:03:06.262484] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:54.793 passed 00:03:54.793 Test: blob_relations3 ...passed 00:03:55.052 Test: blobstore_clean_power_failure ...passed 00:03:55.052 Test: blob_delete_snapshot_power_failure ...[2024-07-14 21:03:06.404136] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:55.052 [2024-07-14 21:03:06.414801] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:55.052 [2024-07-14 21:03:06.414835] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:55.052 [2024-07-14 21:03:06.414843] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:55.052 [2024-07-14 21:03:06.426270] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:55.052 [2024-07-14 21:03:06.426305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:55.052 [2024-07-14 21:03:06.426313] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:55.052 [2024-07-14 21:03:06.426320] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:55.052 [2024-07-14 21:03:06.438578] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:55.052 [2024-07-14 21:03:06.438628] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:55.052 [2024-07-14 21:03:06.451192] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:55.052 [2024-07-14 21:03:06.451246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:55.052 [2024-07-14 21:03:06.462947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:55.052 [2024-07-14 21:03:06.463057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:55.052 passed 00:03:55.052 Test: blob_create_snapshot_power_failure ...[2024-07-14 21:03:06.493908] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:55.052 [2024-07-14 21:03:06.514387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:03:55.052 [2024-07-14 21:03:06.525194] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:55.052 passed 00:03:55.052 Test: blob_io_unit ...passed 00:03:55.052 Test: blob_io_unit_compatibility ...passed 00:03:55.052 Test: blob_ext_md_pages ...passed 00:03:55.311 Test: blob_esnap_io_4096_4096 ...passed 00:03:55.311 Test: blob_esnap_io_512_512 ...passed 00:03:55.311 Test: blob_esnap_io_4096_512 ...passed 00:03:55.311 Test: blob_esnap_io_512_4096 ...passed 00:03:55.311 Test: blob_esnap_clone_resize ...passed 00:03:55.311 Suite: blob_bs_nocopy_noextent 00:03:55.311 Test: blob_open ...passed 00:03:55.311 Test: blob_create ...[2024-07-14 21:03:06.740571] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:55.311 passed 00:03:55.311 Test: blob_create_loop ...passed 00:03:55.312 Test: blob_create_fail ...[2024-07-14 21:03:06.815218] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:55.312 passed 00:03:55.312 Test: blob_create_internal ...passed 00:03:55.571 Test: blob_create_zero_extent ...passed 00:03:55.571 Test: blob_snapshot ...passed 00:03:55.571 Test: blob_clone ...passed 00:03:55.571 Test: blob_inflate 
...[2024-07-14 21:03:06.968912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:03:55.571 passed 00:03:55.571 Test: blob_delete ...passed 00:03:55.571 Test: blob_resize_test ...[2024-07-14 21:03:07.027764] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:55.571 passed 00:03:55.571 Test: blob_resize_thin_test ...passed 00:03:55.571 Test: channel_ops ...passed 00:03:55.830 Test: blob_super ...passed 00:03:55.830 Test: blob_rw_verify_iov ...passed 00:03:55.830 Test: blob_unmap ...passed 00:03:55.830 Test: blob_iter ...passed 00:03:55.830 Test: blob_parse_md ...passed 00:03:55.830 Test: bs_load_pending_removal ...passed 00:03:55.830 Test: bs_unload ...[2024-07-14 21:03:07.304678] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:55.830 passed 00:03:55.830 Test: bs_usable_clusters ...passed 00:03:55.830 Test: blob_crc ...[2024-07-14 21:03:07.360934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:55.830 [2024-07-14 21:03:07.360997] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:55.830 passed 00:03:56.089 Test: blob_flags ...passed 00:03:56.089 Test: bs_version ...passed 00:03:56.089 Test: blob_set_xattrs_test ...[2024-07-14 21:03:07.453532] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:56.089 [2024-07-14 21:03:07.453591] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:56.089 passed 00:03:56.089 Test: blob_thin_prov_alloc ...passed 00:03:56.089 Test: blob_insert_cluster_msg_test ...passed 00:03:56.089 Test: blob_thin_prov_rw ...passed 00:03:56.089 Test: blob_thin_prov_rle ...passed 00:03:56.089 Test: blob_thin_prov_rw_iov ...passed 00:03:56.348 Test: blob_snapshot_rw ...passed 00:03:56.348 Test: blob_snapshot_rw_iov ...passed 00:03:56.348 Test: blob_inflate_rw ...passed 00:03:56.348 Test: blob_snapshot_freeze_io ...passed 00:03:56.348 Test: blob_operation_split_rw ...passed 00:03:56.348 Test: blob_operation_split_rw_iov ...passed 00:03:56.607 Test: blob_simultaneous_operations ...[2024-07-14 21:03:07.911255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:56.607 [2024-07-14 21:03:07.911325] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.607 [2024-07-14 21:03:07.911593] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:56.607 [2024-07-14 21:03:07.911603] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.607 [2024-07-14 21:03:07.915134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:56.607 [2024-07-14 21:03:07.915173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.607 [2024-07-14 21:03:07.915189] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:56.607 [2024-07-14 21:03:07.915212] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:56.607 passed 00:03:56.607 Test: blob_persist_test ...passed 00:03:56.607 Test: blob_decouple_snapshot ...passed 00:03:56.607 Test: blob_seek_io_unit ...passed 00:03:56.607 Test: blob_nested_freezes ...passed 00:03:56.607 Test: blob_clone_resize ...passed 00:03:56.607 Test: blob_shallow_copy ...[2024-07-14 21:03:08.118772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:56.607 [2024-07-14 21:03:08.118845] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:56.607 [2024-07-14 21:03:08.118855] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:56.607 passed 00:03:56.607 Suite: blob_blob_nocopy_noextent 00:03:56.866 Test: blob_write ...passed 00:03:56.866 Test: blob_read ...passed 00:03:56.866 Test: blob_rw_verify ...passed 00:03:56.866 Test: blob_rw_verify_iov_nomem ...passed 00:03:56.866 Test: blob_rw_iov_read_only ...passed 00:03:56.866 Test: blob_xattr ...passed 00:03:56.866 Test: blob_dirty_shutdown ...passed 00:03:56.866 Test: blob_is_degraded ...passed 00:03:56.866 Suite: blob_esnap_bs_nocopy_noextent 00:03:56.866 Test: blob_esnap_create ...passed 00:03:57.125 Test: blob_esnap_thread_add_remove ...passed 00:03:57.125 Test: blob_esnap_clone_snapshot ...passed 00:03:57.125 Test: blob_esnap_clone_inflate ...passed 00:03:57.125 Test: blob_esnap_clone_decouple ...passed 00:03:57.125 Test: blob_esnap_clone_reload ...passed 00:03:57.125 Test: blob_esnap_hotplug ...passed 00:03:57.125 Test: blob_set_parent ...[2024-07-14 21:03:08.590337] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:57.125 [2024-07-14 21:03:08.590404] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:57.125 [2024-07-14 21:03:08.590423] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:57.125 [2024-07-14 21:03:08.590431] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:57.125 [2024-07-14 21:03:08.590474] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:57.125 passed 00:03:57.125 Test: blob_set_external_parent ...[2024-07-14 21:03:08.622684] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:57.125 [2024-07-14 21:03:08.622727] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:57.125 [2024-07-14 21:03:08.622751] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: 
external snapshot is already the parent of blob 00:03:57.125 [2024-07-14 21:03:08.622790] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:57.125 passed 00:03:57.125 Suite: blob_nocopy_extent 00:03:57.125 Test: blob_init ...[2024-07-14 21:03:08.634173] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:57.125 passed 00:03:57.125 Test: blob_thin_provision ...passed 00:03:57.125 Test: blob_read_only ...passed 00:03:57.384 Test: bs_load ...[2024-07-14 21:03:08.674646] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:57.384 passed 00:03:57.384 Test: bs_load_custom_cluster_size ...passed 00:03:57.384 Test: bs_load_after_failed_grow ...passed 00:03:57.384 Test: bs_cluster_sz ...[2024-07-14 21:03:08.695188] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:57.384 [2024-07-14 21:03:08.695258] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:57.384 [2024-07-14 21:03:08.695270] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:57.384 passed 00:03:57.384 Test: bs_resize_md ...passed 00:03:57.384 Test: bs_destroy ...passed 00:03:57.384 Test: bs_type ...passed 00:03:57.384 Test: bs_super_block ...passed 00:03:57.384 Test: bs_test_recover_cluster_count ...passed 00:03:57.384 Test: bs_grow_live ...passed 00:03:57.384 Test: bs_grow_live_no_space ...passed 00:03:57.384 Test: bs_test_grow ...passed 00:03:57.384 Test: blob_serialize_test ...passed 00:03:57.384 Test: super_block_crc ...passed 00:03:57.384 Test: blob_thin_prov_write_count_io ...passed 00:03:57.384 Test: blob_thin_prov_unmap_cluster ...passed 00:03:57.384 Test: bs_load_iter_test ...passed 00:03:57.384 Test: blob_relations ...[2024-07-14 21:03:08.836017] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:57.384 [2024-07-14 21:03:08.836067] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.384 [2024-07-14 21:03:08.836185] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:57.384 [2024-07-14 21:03:08.836195] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.384 passed 00:03:57.384 Test: blob_relations2 ...[2024-07-14 21:03:08.848880] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:57.384 [2024-07-14 21:03:08.848933] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.384 [2024-07-14 21:03:08.848957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:57.384 [2024-07-14 21:03:08.848964] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.384 [2024-07-14 
21:03:08.849076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:57.384 [2024-07-14 21:03:08.849086] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.384 [2024-07-14 21:03:08.849118] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:57.384 [2024-07-14 21:03:08.849125] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.384 passed 00:03:57.384 Test: blob_relations3 ...passed 00:03:57.644 Test: blobstore_clean_power_failure ...passed 00:03:57.644 Test: blob_delete_snapshot_power_failure ...[2024-07-14 21:03:08.996906] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:57.644 [2024-07-14 21:03:09.007921] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:57.644 [2024-07-14 21:03:09.018988] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:57.644 [2024-07-14 21:03:09.019034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:57.644 [2024-07-14 21:03:09.019057] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.644 [2024-07-14 21:03:09.031515] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:57.644 [2024-07-14 21:03:09.031545] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:57.644 [2024-07-14 21:03:09.031553] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:57.644 [2024-07-14 21:03:09.031560] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.644 [2024-07-14 21:03:09.044929] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:57.644 [2024-07-14 21:03:09.044990] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:03:57.644 [2024-07-14 21:03:09.045013] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:03:57.644 [2024-07-14 21:03:09.045020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.644 [2024-07-14 21:03:09.058203] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:03:57.644 [2024-07-14 21:03:09.058280] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.644 [2024-07-14 21:03:09.070660] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:03:57.644 [2024-07-14 21:03:09.070772] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.644 [2024-07-14 21:03:09.085880] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:03:57.644 [2024-07-14 21:03:09.085955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:57.644 passed 00:03:57.644 Test: blob_create_snapshot_power_failure ...[2024-07-14 21:03:09.119397] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:03:57.644 [2024-07-14 21:03:09.130446] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:03:57.644 [2024-07-14 21:03:09.151274] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:03:57.644 [2024-07-14 21:03:09.162695] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:03:57.904 passed 00:03:57.904 Test: blob_io_unit ...passed 00:03:57.904 Test: blob_io_unit_compatibility ...passed 00:03:57.904 Test: blob_ext_md_pages ...passed 00:03:57.904 Test: blob_esnap_io_4096_4096 ...passed 00:03:57.904 Test: blob_esnap_io_512_512 ...passed 00:03:57.904 Test: blob_esnap_io_4096_512 ...passed 00:03:57.904 Test: blob_esnap_io_512_4096 ...passed 00:03:57.904 Test: blob_esnap_clone_resize ...passed 00:03:57.904 Suite: blob_bs_nocopy_extent 00:03:57.904 Test: blob_open ...passed 00:03:57.904 Test: blob_create ...[2024-07-14 21:03:09.375983] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:03:57.904 passed 00:03:57.904 Test: blob_create_loop ...passed 00:03:57.904 Test: blob_create_fail ...[2024-07-14 21:03:09.445022] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:58.163 passed 00:03:58.163 Test: blob_create_internal ...passed 00:03:58.163 Test: blob_create_zero_extent ...passed 00:03:58.163 Test: blob_snapshot ...passed 00:03:58.163 Test: blob_clone ...passed 00:03:58.163 Test: blob_inflate ...[2024-07-14 21:03:09.590242] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 
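The decouple-parent failure above, like the earlier "Cannot remove snapshot with more than one clone" errors, reflects invariants on the snapshot/clone chain. A toy model of those two invariants — struct blob here is far simpler than the real spdk_blob, and the rule that a snapshot with exactly one clone is deletable is inferred from the "more than one clone" wording, not from the source:

    #include <assert.h>
    #include <errno.h>
    #include <stddef.h>

    struct blob {
        struct blob *parent;   /* snapshot this blob was cloned from, or NULL */
        unsigned clone_count;  /* number of clones, if this blob is a snapshot */
        int is_snapshot;
    };

    /* "Cannot decouple parent of blob with no parent." */
    static int blob_decouple_parent(struct blob *b)
    {
        if (b->parent == NULL)
            return -EINVAL;
        b->parent->clone_count--;
        b->parent = NULL;
        return 0;
    }

    /* "Cannot remove snapshot with more than one clone" */
    static int blob_is_deletable(const struct blob *b)
    {
        if (b->is_snapshot && b->clone_count > 1)
            return -EBUSY;
        return 0;
    }

    int main(void)
    {
        struct blob snap  = { NULL, 2, 1 };
        struct blob clone = { &snap, 0, 0 };
        struct blob plain = { NULL, 0, 0 };

        assert(blob_decouple_parent(&plain) == -EINVAL); /* nothing to decouple */
        assert(blob_is_deletable(&snap) == -EBUSY);      /* two clones still open */
        assert(blob_decouple_parent(&clone) == 0);       /* clone count drops to 1 */
        assert(blob_is_deletable(&snap) == 0);           /* one clone is allowed */
        return 0;
    }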
00:03:58.163 passed 00:03:58.163 Test: blob_delete ...passed 00:03:58.163 Test: blob_resize_test ...[2024-07-14 21:03:09.645076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:03:58.163 passed 00:03:58.163 Test: blob_resize_thin_test ...passed 00:03:58.423 Test: channel_ops ...passed 00:03:58.423 Test: blob_super ...passed 00:03:58.423 Test: blob_rw_verify_iov ...passed 00:03:58.423 Test: blob_unmap ...passed 00:03:58.423 Test: blob_iter ...passed 00:03:58.423 Test: blob_parse_md ...passed 00:03:58.423 Test: bs_load_pending_removal ...passed 00:03:58.423 Test: bs_unload ...[2024-07-14 21:03:09.915475] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:03:58.423 passed 00:03:58.423 Test: bs_usable_clusters ...passed 00:03:58.682 Test: blob_crc ...[2024-07-14 21:03:09.973122] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:58.682 [2024-07-14 21:03:09.973187] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:03:58.682 passed 00:03:58.682 Test: blob_flags ...passed 00:03:58.682 Test: bs_version ...passed 00:03:58.682 Test: blob_set_xattrs_test ...[2024-07-14 21:03:10.067954] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:58.682 [2024-07-14 21:03:10.068028] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:03:58.682 passed 00:03:58.682 Test: blob_thin_prov_alloc ...passed 00:03:58.682 Test: blob_insert_cluster_msg_test ...passed 00:03:58.682 Test: blob_thin_prov_rw ...passed 00:03:58.682 Test: blob_thin_prov_rle ...passed 00:03:58.940 Test: blob_thin_prov_rw_iov ...passed 00:03:58.940 Test: blob_snapshot_rw ...passed 00:03:58.940 Test: blob_snapshot_rw_iov ...passed 00:03:58.940 Test: blob_inflate_rw ...passed 00:03:58.940 Test: blob_snapshot_freeze_io ...passed 00:03:58.940 Test: blob_operation_split_rw ...passed 00:03:59.198 Test: blob_operation_split_rw_iov ...passed 00:03:59.198 Test: blob_simultaneous_operations ...[2024-07-14 21:03:10.526228] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:59.198 [2024-07-14 21:03:10.526281] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:59.198 [2024-07-14 21:03:10.526642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:59.198 [2024-07-14 21:03:10.526652] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:59.198 [2024-07-14 21:03:10.531946] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:59.198 [2024-07-14 21:03:10.531972] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:59.198 [2024-07-14 21:03:10.531989] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:03:59.198 [2024-07-14 21:03:10.531996] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:59.198 passed 00:03:59.198 Test: blob_persist_test ...passed 00:03:59.198 Test: blob_decouple_snapshot ...passed 00:03:59.198 Test: blob_seek_io_unit ...passed 00:03:59.198 Test: blob_nested_freezes ...passed 00:03:59.198 Test: blob_clone_resize ...passed 00:03:59.198 Test: blob_shallow_copy ...[2024-07-14 21:03:10.740200] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:03:59.198 [2024-07-14 21:03:10.740291] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:03:59.198 [2024-07-14 21:03:10.740301] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:03:59.457 passed 00:03:59.457 Suite: blob_blob_nocopy_extent 00:03:59.457 Test: blob_write ...passed 00:03:59.457 Test: blob_read ...passed 00:03:59.457 Test: blob_rw_verify ...passed 00:03:59.457 Test: blob_rw_verify_iov_nomem ...passed 00:03:59.457 Test: blob_rw_iov_read_only ...passed 00:03:59.457 Test: blob_xattr ...passed 00:03:59.457 Test: blob_dirty_shutdown ...passed 00:03:59.457 Test: blob_is_degraded ...passed 00:03:59.457 Suite: blob_esnap_bs_nocopy_extent 00:03:59.717 Test: blob_esnap_create ...passed 00:03:59.717 Test: blob_esnap_thread_add_remove ...passed 00:03:59.717 Test: blob_esnap_clone_snapshot ...passed 00:03:59.717 Test: blob_esnap_clone_inflate ...passed 00:03:59.717 Test: blob_esnap_clone_decouple ...passed 00:03:59.717 Test: blob_esnap_clone_reload ...passed 00:03:59.717 Test: blob_esnap_hotplug ...passed 00:03:59.717 Test: blob_set_parent ...[2024-07-14 21:03:11.224753] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:03:59.717 [2024-07-14 21:03:11.224851] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:03:59.717 [2024-07-14 21:03:11.224872] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:03:59.717 [2024-07-14 21:03:11.224881] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:03:59.717 [2024-07-14 21:03:11.224934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:59.717 passed 00:03:59.717 Test: blob_set_external_parent ...[2024-07-14 21:03:11.258899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:03:59.717 [2024-07-14 21:03:11.258958] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:03:59.717 [2024-07-14 21:03:11.258981] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:03:59.717 [2024-07-14 21:03:11.259023] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:03:59.976 passed 00:03:59.976 Suite: blob_copy_noextent 00:03:59.976 Test: blob_init ...[2024-07-14 21:03:11.270678] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:03:59.976 passed 00:03:59.976 Test: blob_thin_provision ...passed 00:03:59.976 Test: blob_read_only ...passed 00:03:59.976 Test: bs_load ...[2024-07-14 21:03:11.315947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:03:59.976 passed 00:03:59.976 Test: bs_load_custom_cluster_size ...passed 00:03:59.976 Test: bs_load_after_failed_grow ...passed 00:03:59.976 Test: bs_cluster_sz ...[2024-07-14 21:03:11.336767] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:03:59.976 [2024-07-14 21:03:11.336842] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 00:03:59.976 [2024-07-14 21:03:11.336857] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:03:59.976 passed 00:03:59.976 Test: bs_resize_md ...passed 00:03:59.976 Test: bs_destroy ...passed 00:03:59.976 Test: bs_type ...passed 00:03:59.976 Test: bs_super_block ...passed 00:03:59.976 Test: bs_test_recover_cluster_count ...passed 00:03:59.976 Test: bs_grow_live ...passed 00:03:59.976 Test: bs_grow_live_no_space ...passed 00:03:59.976 Test: bs_test_grow ...passed 00:03:59.976 Test: blob_serialize_test ...passed 00:03:59.976 Test: super_block_crc ...passed 00:03:59.976 Test: blob_thin_prov_write_count_io ...passed 00:03:59.976 Test: blob_thin_prov_unmap_cluster ...passed 00:03:59.976 Test: bs_load_iter_test ...passed 00:03:59.976 Test: blob_relations ...[2024-07-14 21:03:11.482052] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:59.976 [2024-07-14 21:03:11.482142] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:59.976 [2024-07-14 21:03:11.482246] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:59.976 [2024-07-14 21:03:11.482255] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:59.976 passed 00:03:59.976 Test: blob_relations2 ...[2024-07-14 21:03:11.494594] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:59.976 [2024-07-14 21:03:11.494625] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:59.976 [2024-07-14 21:03:11.494634] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:59.976 [2024-07-14 21:03:11.494640] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:59.976 [2024-07-14 21:03:11.494899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: 
Cannot remove snapshot with more than one clone 00:03:59.976 [2024-07-14 21:03:11.494912] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:59.976 [2024-07-14 21:03:11.494947] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:03:59.976 [2024-07-14 21:03:11.494955] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:03:59.976 passed 00:03:59.976 Test: blob_relations3 ...passed 00:04:00.235 Test: blobstore_clean_power_failure ...passed 00:04:00.235 Test: blob_delete_snapshot_power_failure ...[2024-07-14 21:03:11.643341] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:04:00.235 [2024-07-14 21:03:11.655076] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:04:00.235 [2024-07-14 21:03:11.655126] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:00.235 [2024-07-14 21:03:11.655150] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:00.235 [2024-07-14 21:03:11.667043] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:04:00.235 [2024-07-14 21:03:11.667089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:04:00.235 [2024-07-14 21:03:11.667113] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:00.235 [2024-07-14 21:03:11.667120] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:00.235 [2024-07-14 21:03:11.679642] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:04:00.235 [2024-07-14 21:03:11.679682] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:00.235 [2024-07-14 21:03:11.692608] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:04:00.236 [2024-07-14 21:03:11.692644] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:00.236 [2024-07-14 21:03:11.705063] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:04:00.236 [2024-07-14 21:03:11.705110] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:00.236 passed 00:04:00.236 Test: blob_create_snapshot_power_failure ...[2024-07-14 21:03:11.738516] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:04:00.236 [2024-07-14 21:03:11.760574] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 1 read failed for blobid 0x100000001: -5 00:04:00.236 [2024-07-14 21:03:11.772286] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:04:00.495 passed 
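The recurring "Cannot remove snapshot with more than one clone" and "Failed to remove blob" errors above are the blobstore's deletion-ordering rules being exercised on purpose: a snapshot stays undeletable while clones still reference it. A minimal sketch of the lifecycle these cases drive, using the public spdk_bs_* calls; the blobstore handle and blob id are assumed to come from an earlier spdk_bs_init()/spdk_bs_create_blob(), and error handling is trimmed:

    #include "spdk/blob.h"

    /* Completion for spdk_bs_delete_blob(). */
    static void
    delete_done(void *cb_arg, int bserrno)
    {
        /* bserrno stays negative (assumption: -EBUSY in current SPDK) while
         * clones still reference the snapshot, matching the errors above. */
    }

    /* Completion for spdk_bs_create_snapshot(): receives the snapshot's id. */
    static void
    snapshot_created(void *cb_arg, spdk_blob_id snapshot_id, int bserrno)
    {
        struct spdk_blob_store *bs = cb_arg;

        if (bserrno != 0) {
            return;
        }
        /* Fails while more than one clone still points at the snapshot;
         * delete or inflate the clones first. */
        spdk_bs_delete_blob(bs, snapshot_id, delete_done, NULL);
    }

    static void
    make_and_delete_snapshot(struct spdk_blob_store *bs, spdk_blob_id blob_id)
    {
        /* NULL xattrs: no extra metadata attached to the snapshot. */
        spdk_bs_create_snapshot(bs, blob_id, NULL, snapshot_created, bs);
    }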
00:04:00.495 Test: blob_io_unit ...passed 00:04:00.495 Test: blob_io_unit_compatibility ...passed 00:04:00.495 Test: blob_ext_md_pages ...passed 00:04:00.495 Test: blob_esnap_io_4096_4096 ...passed 00:04:00.495 Test: blob_esnap_io_512_512 ...passed 00:04:00.495 Test: blob_esnap_io_4096_512 ...passed 00:04:00.495 Test: blob_esnap_io_512_4096 ...passed 00:04:00.495 Test: blob_esnap_clone_resize ...passed 00:04:00.495 Suite: blob_bs_copy_noextent 00:04:00.495 Test: blob_open ...passed 00:04:00.495 Test: blob_create ...[2024-07-14 21:03:11.986303] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:04:00.495 passed 00:04:00.495 Test: blob_create_loop ...passed 00:04:00.754 Test: blob_create_fail ...[2024-07-14 21:03:12.060045] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:00.754 passed 00:04:00.754 Test: blob_create_internal ...passed 00:04:00.754 Test: blob_create_zero_extent ...passed 00:04:00.754 Test: blob_snapshot ...passed 00:04:00.754 Test: blob_clone ...passed 00:04:00.754 Test: blob_inflate ...[2024-07-14 21:03:12.210884] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:04:00.754 passed 00:04:00.754 Test: blob_delete ...passed 00:04:00.754 Test: blob_resize_test ...[2024-07-14 21:03:12.275762] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:04:00.754 passed 00:04:01.012 Test: blob_resize_thin_test ...passed 00:04:01.012 Test: channel_ops ...passed 00:04:01.012 Test: blob_super ...passed 00:04:01.012 Test: blob_rw_verify_iov ...passed 00:04:01.012 Test: blob_unmap ...passed 00:04:01.012 Test: blob_iter ...passed 00:04:01.012 Test: blob_parse_md ...passed 00:04:01.012 Test: bs_load_pending_removal ...passed 00:04:01.012 Test: bs_unload ...[2024-07-14 21:03:12.544134] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:04:01.012 passed 00:04:01.270 Test: bs_usable_clusters ...passed 00:04:01.271 Test: blob_crc ...[2024-07-14 21:03:12.606210] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:04:01.271 [2024-07-14 21:03:12.606276] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:04:01.271 passed 00:04:01.271 Test: blob_flags ...passed 00:04:01.271 Test: bs_version ...passed 00:04:01.271 Test: blob_set_xattrs_test ...[2024-07-14 21:03:12.697264] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:01.271 [2024-07-14 21:03:12.697328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:01.271 passed 00:04:01.271 Test: blob_thin_prov_alloc ...passed 00:04:01.271 Test: blob_insert_cluster_msg_test ...passed 00:04:01.271 Test: blob_thin_prov_rw ...passed 00:04:01.529 Test: blob_thin_prov_rle ...passed 00:04:01.529 Test: blob_thin_prov_rw_iov ...passed 00:04:01.529 Test: blob_snapshot_rw ...passed 00:04:01.529 Test: blob_snapshot_rw_iov ...passed 00:04:01.529 Test: 
blob_inflate_rw ...passed 00:04:01.529 Test: blob_snapshot_freeze_io ...passed 00:04:01.803 Test: blob_operation_split_rw ...passed 00:04:01.803 Test: blob_operation_split_rw_iov ...passed 00:04:01.803 Test: blob_simultaneous_operations ...[2024-07-14 21:03:13.151225] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:01.803 [2024-07-14 21:03:13.151298] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:01.803 [2024-07-14 21:03:13.151540] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:01.803 [2024-07-14 21:03:13.151549] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:01.803 [2024-07-14 21:03:13.153747] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:01.803 [2024-07-14 21:03:13.153766] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:01.803 [2024-07-14 21:03:13.153781] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:01.803 [2024-07-14 21:03:13.153788] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:01.803 passed 00:04:01.803 Test: blob_persist_test ...passed 00:04:01.803 Test: blob_decouple_snapshot ...passed 00:04:01.803 Test: blob_seek_io_unit ...passed 00:04:01.803 Test: blob_nested_freezes ...passed 00:04:01.803 Test: blob_clone_resize ...passed 00:04:02.064 Test: blob_shallow_copy ...[2024-07-14 21:03:13.362073] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:04:02.065 [2024-07-14 21:03:13.362149] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:04:02.065 [2024-07-14 21:03:13.362159] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:04:02.065 passed 00:04:02.065 Suite: blob_blob_copy_noextent 00:04:02.065 Test: blob_write ...passed 00:04:02.065 Test: blob_read ...passed 00:04:02.065 Test: blob_rw_verify ...passed 00:04:02.065 Test: blob_rw_verify_iov_nomem ...passed 00:04:02.065 Test: blob_rw_iov_read_only ...passed 00:04:02.065 Test: blob_xattr ...passed 00:04:02.065 Test: blob_dirty_shutdown ...passed 00:04:02.065 Test: blob_is_degraded ...passed 00:04:02.065 Suite: blob_esnap_bs_copy_noextent 00:04:02.323 Test: blob_esnap_create ...passed 00:04:02.323 Test: blob_esnap_thread_add_remove ...passed 00:04:02.323 Test: blob_esnap_clone_snapshot ...passed 00:04:02.323 Test: blob_esnap_clone_inflate ...passed 00:04:02.323 Test: blob_esnap_clone_decouple ...passed 00:04:02.323 Test: blob_esnap_clone_reload ...passed 00:04:02.323 Test: blob_esnap_hotplug ...passed 00:04:02.323 Test: blob_set_parent ...[2024-07-14 21:03:13.826152] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:04:02.323 [2024-07-14 21:03:13.826240] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:04:02.323 [2024-07-14 21:03:13.826275] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:04:02.323 [2024-07-14 21:03:13.826284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:04:02.323 [2024-07-14 21:03:13.826328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:04:02.323 passed 00:04:02.323 Test: blob_set_external_parent ...[2024-07-14 21:03:13.858422] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:04:02.323 [2024-07-14 21:03:13.858481] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:04:02.323 [2024-07-14 21:03:13.858509] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:04:02.323 [2024-07-14 21:03:13.858570] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:04:02.323 passed 00:04:02.323 Suite: blob_copy_extent 00:04:02.323 Test: blob_init ...[2024-07-14 21:03:13.870480] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5491:spdk_bs_init: *ERROR*: unsupported dev block length of 500 00:04:02.582 passed 00:04:02.583 Test: blob_thin_provision ...passed 00:04:02.583 Test: blob_read_only ...passed 00:04:02.583 Test: bs_load ...[2024-07-14 21:03:13.912387] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c: 966:blob_parse: *ERROR*: Blobid (0x0) doesn't match what's in metadata (0x100000000) 00:04:02.583 passed 00:04:02.583 Test: bs_load_custom_cluster_size ...passed 00:04:02.583 Test: bs_load_after_failed_grow ...passed 00:04:02.583 Test: bs_cluster_sz ...[2024-07-14 21:03:13.934828] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3824:bs_opts_verify: *ERROR*: Blobstore options cannot be set to 0 00:04:02.583 [2024-07-14 21:03:13.934922] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5623:spdk_bs_init: *ERROR*: Blobstore metadata cannot use more clusters than is available, please decrease number of pages reserved for metadata or increase cluster size. 
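The bs_cluster_sz case running here walks spdk_bs_init() through its option validation: zeroed options, metadata reservations larger than the available clusters (the message just above), and a cluster size below the 4096-byte page size are each rejected before the blobstore is created. A hedged sketch of a valid initialization against a bdev-backed device; the bdev name "Malloc0" and the empty event callback are illustrative assumptions, not taken from this run, and spdk_bs_opts_init() takes the struct size in recent SPDK releases:

    #include "spdk/bdev.h"
    #include "spdk/blob.h"
    #include "spdk/blob_bdev.h"

    static void
    bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
    {
        if (bserrno != 0) {
            /* Lands here on the validation paths logged above, e.g. a
             * cluster_sz smaller than the device page size. */
            return;
        }
        /* bs is ready for blob creation at this point. */
    }

    static void
    base_bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
                       void *event_ctx)
    {
        /* React to hot-remove or resize of the base bdev. */
    }

    static void
    init_blobstore(void)
    {
        struct spdk_bs_dev *bs_dev;
        struct spdk_bs_opts opts;

        if (spdk_bdev_create_bs_dev_ext("Malloc0", base_bdev_event_cb,
                                        NULL, &bs_dev) != 0) {
            return;
        }

        spdk_bs_opts_init(&opts, sizeof(opts));
        opts.cluster_sz = 1024 * 1024; /* must be >= the 4096-byte page size checked above */
        spdk_bs_init(bs_dev, &opts, bs_init_done, NULL);
    }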
00:04:02.583 [2024-07-14 21:03:13.934934] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:3884:bs_alloc: *ERROR*: Cluster size 4095 is smaller than page size 4096 00:04:02.583 passed 00:04:02.583 Test: bs_resize_md ...passed 00:04:02.583 Test: bs_destroy ...passed 00:04:02.583 Test: bs_type ...passed 00:04:02.583 Test: bs_super_block ...passed 00:04:02.583 Test: bs_test_recover_cluster_count ...passed 00:04:02.583 Test: bs_grow_live ...passed 00:04:02.583 Test: bs_grow_live_no_space ...passed 00:04:02.583 Test: bs_test_grow ...passed 00:04:02.583 Test: blob_serialize_test ...passed 00:04:02.583 Test: super_block_crc ...passed 00:04:02.583 Test: blob_thin_prov_write_count_io ...passed 00:04:02.583 Test: blob_thin_prov_unmap_cluster ...passed 00:04:02.583 Test: bs_load_iter_test ...passed 00:04:02.583 Test: blob_relations ...[2024-07-14 21:03:14.074109] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:02.583 [2024-07-14 21:03:14.074170] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:02.583 [2024-07-14 21:03:14.074285] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:02.583 [2024-07-14 21:03:14.074295] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:02.583 passed 00:04:02.583 Test: blob_relations2 ...[2024-07-14 21:03:14.087472] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:02.583 [2024-07-14 21:03:14.087518] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:02.583 [2024-07-14 21:03:14.087542] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:02.583 [2024-07-14 21:03:14.087548] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:02.583 [2024-07-14 21:03:14.087666] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:02.583 [2024-07-14 21:03:14.087677] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:02.583 [2024-07-14 21:03:14.087957] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8386:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot with more than one clone 00:04:02.583 [2024-07-14 21:03:14.087974] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:02.583 passed 00:04:02.583 Test: blob_relations3 ...passed 00:04:02.842 Test: blobstore_clean_power_failure ...passed 00:04:02.842 Test: blob_delete_snapshot_power_failure ...[2024-07-14 21:03:14.235117] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:04:02.842 [2024-07-14 21:03:14.246328] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:04:02.842 [2024-07-14 21:03:14.257020] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:04:02.842 [2024-07-14 21:03:14.257066] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:02.842 [2024-07-14 21:03:14.257089] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:02.842 [2024-07-14 21:03:14.268863] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:04:02.842 [2024-07-14 21:03:14.268911] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:04:02.842 [2024-07-14 21:03:14.268919] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:02.842 [2024-07-14 21:03:14.268927] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:02.842 [2024-07-14 21:03:14.280823] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:04:02.842 [2024-07-14 21:03:14.280885] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1466:blob_load_snapshot_cpl: *ERROR*: Snapshot fail 00:04:02.842 [2024-07-14 21:03:14.280908] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8300:delete_snapshot_open_clone_cb: *ERROR*: Failed to open clone 00:04:02.842 [2024-07-14 21:03:14.280916] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:02.842 [2024-07-14 21:03:14.293530] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8227:delete_snapshot_sync_snapshot_xattr_cpl: *ERROR*: Failed to sync MD with xattr on blob 00:04:02.842 [2024-07-14 21:03:14.293576] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:02.842 [2024-07-14 21:03:14.305895] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8096:delete_snapshot_sync_clone_cpl: *ERROR*: Failed to sync MD on clone 00:04:02.842 [2024-07-14 21:03:14.305941] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:02.842 [2024-07-14 21:03:14.317905] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8040:delete_snapshot_sync_snapshot_cpl: *ERROR*: Failed to sync MD on blob 00:04:02.842 [2024-07-14 21:03:14.317951] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:02.842 passed 00:04:02.842 Test: blob_create_snapshot_power_failure ...[2024-07-14 21:03:14.354284] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 0 read failed for blobid 0x100000000: -5 00:04:02.842 [2024-07-14 21:03:14.366254] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1579:blob_load_cpl_extents_cpl: *ERROR*: Extent page read failed: -5 00:04:03.101 [2024-07-14 21:03:14.389649] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1670:blob_load_cpl: *ERROR*: Metadata page 2 read failed for blobid 0x100000002: -5 00:04:03.101 [2024-07-14 21:03:14.401497] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6446:bs_clone_snapshot_origblob_cleanup: *ERROR*: Cleanup error -5 00:04:03.101 passed 00:04:03.101 Test: blob_io_unit ...passed 00:04:03.101 Test: blob_io_unit_compatibility ...passed 00:04:03.101 Test: blob_ext_md_pages ...passed 00:04:03.101 Test: blob_esnap_io_4096_4096 ...passed 00:04:03.101 Test: blob_esnap_io_512_512 ...passed 00:04:03.101 Test: blob_esnap_io_4096_512 ...passed 00:04:03.101 Test: 
blob_esnap_io_512_4096 ...passed 00:04:03.101 Test: blob_esnap_clone_resize ...passed 00:04:03.101 Suite: blob_bs_copy_extent 00:04:03.101 Test: blob_open ...passed 00:04:03.101 Test: blob_create ...[2024-07-14 21:03:14.612034] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -28, size in clusters/size: 65 (clusters) 00:04:03.101 passed 00:04:03.360 Test: blob_create_loop ...passed 00:04:03.360 Test: blob_create_fail ...[2024-07-14 21:03:14.684773] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:03.360 passed 00:04:03.360 Test: blob_create_internal ...passed 00:04:03.360 Test: blob_create_zero_extent ...passed 00:04:03.360 Test: blob_snapshot ...passed 00:04:03.360 Test: blob_clone ...passed 00:04:03.360 Test: blob_inflate ...[2024-07-14 21:03:14.837776] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7109:bs_inflate_blob_open_cpl: *ERROR*: Cannot decouple parent of blob with no parent. 00:04:03.360 passed 00:04:03.360 Test: blob_delete ...passed 00:04:03.360 Test: blob_resize_test ...[2024-07-14 21:03:14.897908] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7845:bs_resize_unfreeze_cpl: *ERROR*: Unfreeze failed, ctx->rc=-28 00:04:03.619 passed 00:04:03.619 Test: blob_resize_thin_test ...passed 00:04:03.619 Test: channel_ops ...passed 00:04:03.619 Test: blob_super ...passed 00:04:03.619 Test: blob_rw_verify_iov ...passed 00:04:03.619 Test: blob_unmap ...passed 00:04:03.619 Test: blob_iter ...passed 00:04:03.619 Test: blob_parse_md ...passed 00:04:03.619 Test: bs_load_pending_removal ...passed 00:04:03.878 Test: bs_unload ...[2024-07-14 21:03:15.168182] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:5878:spdk_bs_unload: *ERROR*: Blobstore still has open blobs 00:04:03.878 passed 00:04:03.878 Test: bs_usable_clusters ...passed 00:04:03.878 Test: blob_crc ...[2024-07-14 21:03:15.226899] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:04:03.878 [2024-07-14 21:03:15.226960] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:1679:blob_load_cpl: *ERROR*: Metadata page 0 crc mismatch for blobid 0x100000000 00:04:03.878 passed 00:04:03.878 Test: blob_flags ...passed 00:04:03.878 Test: bs_version ...passed 00:04:03.878 Test: blob_set_xattrs_test ...[2024-07-14 21:03:15.321674] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:03.878 [2024-07-14 21:03:15.321735] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:6328:bs_create_blob: *ERROR*: Failed to create blob: Unknown error -22, size in clusters/size: 0 (clusters) 00:04:03.878 passed 00:04:03.878 Test: blob_thin_prov_alloc ...passed 00:04:03.878 Test: blob_insert_cluster_msg_test ...passed 00:04:04.136 Test: blob_thin_prov_rw ...passed 00:04:04.137 Test: blob_thin_prov_rle ...passed 00:04:04.137 Test: blob_thin_prov_rw_iov ...passed 00:04:04.137 Test: blob_snapshot_rw ...passed 00:04:04.137 Test: blob_snapshot_rw_iov ...passed 00:04:04.137 Test: blob_inflate_rw ...passed 00:04:04.137 Test: blob_snapshot_freeze_io ...passed 00:04:04.395 Test: blob_operation_split_rw ...passed 00:04:04.395 Test: blob_operation_split_rw_iov ...passed 00:04:04.395 Test: blob_simultaneous_operations ...[2024-07-14 21:03:15.772005] 
/home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:04.395 [2024-07-14 21:03:15.772080] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:04.395 [2024-07-14 21:03:15.772296] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:04.395 [2024-07-14 21:03:15.772305] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:04.395 [2024-07-14 21:03:15.774933] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:04.395 [2024-07-14 21:03:15.774953] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:04.395 [2024-07-14 21:03:15.774969] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8413:bs_is_blob_deletable: *ERROR*: Cannot remove snapshot because it is open 00:04:04.395 [2024-07-14 21:03:15.774977] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:8353:bs_delete_blob_finish: *ERROR*: Failed to remove blob 00:04:04.395 passed 00:04:04.395 Test: blob_persist_test ...passed 00:04:04.395 Test: blob_decouple_snapshot ...passed 00:04:04.395 Test: blob_seek_io_unit ...passed 00:04:04.395 Test: blob_nested_freezes ...passed 00:04:04.653 Test: blob_clone_resize ...passed 00:04:04.653 Test: blob_shallow_copy ...[2024-07-14 21:03:15.970003] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7332:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, blob must be read only 00:04:04.653 [2024-07-14 21:03:15.970092] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7343:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device must have at least blob size 00:04:04.653 [2024-07-14 21:03:15.970102] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7351:bs_shallow_copy_blob_open_cpl: *ERROR*: blob 0x100000000 shallow copy, external device block size is not compatible with blobstore block size 00:04:04.653 passed 00:04:04.653 Suite: blob_blob_copy_extent 00:04:04.653 Test: blob_write ...passed 00:04:04.653 Test: blob_read ...passed 00:04:04.653 Test: blob_rw_verify ...passed 00:04:04.653 Test: blob_rw_verify_iov_nomem ...passed 00:04:04.653 Test: blob_rw_iov_read_only ...passed 00:04:04.653 Test: blob_xattr ...passed 00:04:04.653 Test: blob_dirty_shutdown ...passed 00:04:04.911 Test: blob_is_degraded ...passed 00:04:04.911 Suite: blob_esnap_bs_copy_extent 00:04:04.911 Test: blob_esnap_create ...passed 00:04:04.911 Test: blob_esnap_thread_add_remove ...passed 00:04:04.911 Test: blob_esnap_clone_snapshot ...passed 00:04:04.911 Test: blob_esnap_clone_inflate ...passed 00:04:04.911 Test: blob_esnap_clone_decouple ...passed 00:04:04.911 Test: blob_esnap_clone_reload ...passed 00:04:04.911 Test: blob_esnap_hotplug ...passed 00:04:05.169 Test: blob_set_parent ...[2024-07-14 21:03:16.467862] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7613:spdk_bs_blob_set_parent: *ERROR*: snapshot id not valid 00:04:05.169 [2024-07-14 21:03:16.467904] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7619:spdk_bs_blob_set_parent: *ERROR*: blob id and snapshot id cannot be the same 00:04:05.169 [2024-07-14 21:03:16.467926] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7548:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob is not a snapshot 00:04:05.169 
[2024-07-14 21:03:16.467936] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7555:bs_set_parent_snapshot_open_cpl: *ERROR*: parent blob has a number of clusters different from child's ones 00:04:05.169 [2024-07-14 21:03:16.468146] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7594:bs_set_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:04:05.169 passed 00:04:05.169 Test: blob_set_external_parent ...[2024-07-14 21:03:16.502384] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7787:spdk_bs_blob_set_external_parent: *ERROR*: blob id and external snapshot id cannot be the same 00:04:05.169 [2024-07-14 21:03:16.502447] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7796:spdk_bs_blob_set_external_parent: *ERROR*: Esnap device size 61440 is not an integer multiple of cluster size 16384 00:04:05.169 [2024-07-14 21:03:16.502471] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7748:bs_set_external_parent_blob_open_cpl: *ERROR*: external snapshot is already the parent of blob 00:04:05.169 [2024-07-14 21:03:16.502513] /home/vagrant/spdk_repo/spdk/lib/blob/blobstore.c:7754:bs_set_external_parent_blob_open_cpl: *ERROR*: blob is not thin-provisioned 00:04:05.169 passed 00:04:05.169 00:04:05.169 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.169 suites 16 16 n/a 0 0 00:04:05.169 tests 376 376 376 0 0 00:04:05.169 asserts 143965 143965 143965 0 n/a 00:04:05.169 00:04:05.169 Elapsed time = 10.500 seconds 00:04:05.169 21:03:16 unittest.unittest_blob_blobfs -- unit/unittest.sh@42 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blob/blob_bdev.c/blob_bdev_ut 00:04:05.169 00:04:05.169 00:04:05.169 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.169 http://cunit.sourceforge.net/ 00:04:05.169 00:04:05.169 00:04:05.169 Suite: blob_bdev 00:04:05.169 Test: create_bs_dev ...passed 00:04:05.169 Test: create_bs_dev_ro ...[2024-07-14 21:03:16.525832] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 529:spdk_bdev_create_bs_dev: *ERROR*: bdev name 'nope': unsupported options 00:04:05.169 passed 00:04:05.169 Test: create_bs_dev_rw ...passed 00:04:05.169 Test: claim_bs_dev ...passed 00:04:05.169 Test: claim_bs_dev_ro ...passed 00:04:05.169 Test: deferred_destroy_refs ...passed 00:04:05.169 Test: deferred_destroy_channels ...passed 00:04:05.169 Test: deferred_destroy_threads ...passed 00:04:05.169 00:04:05.169 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.169 suites 1 1 n/a 0 0 00:04:05.169 tests 8 8 8 0 0 00:04:05.169 asserts 119 119 119 0 n/a 00:04:05.169 00:04:05.169 Elapsed time = 0.000 seconds 00:04:05.169 [2024-07-14 21:03:16.526139] /home/vagrant/spdk_repo/spdk/module/blob/bdev/blob_bdev.c: 340:spdk_bs_bdev_claim: *ERROR*: could not claim bs dev 00:04:05.169 21:03:16 unittest.unittest_blob_blobfs -- unit/unittest.sh@43 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/tree.c/tree_ut 00:04:05.169 00:04:05.169 00:04:05.169 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.169 http://cunit.sourceforge.net/ 00:04:05.169 00:04:05.169 00:04:05.169 Suite: tree 00:04:05.169 Test: blobfs_tree_op_test ...passed 00:04:05.169 00:04:05.169 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.169 suites 1 1 n/a 0 0 00:04:05.169 tests 1 1 1 0 0 00:04:05.169 asserts 27 27 27 0 n/a 00:04:05.169 00:04:05.169 Elapsed time = 0.000 seconds 00:04:05.169 21:03:16 unittest.unittest_blob_blobfs -- unit/unittest.sh@44 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_async_ut/blobfs_async_ut 00:04:05.169 00:04:05.169 00:04:05.169 
CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.169 http://cunit.sourceforge.net/ 00:04:05.169 00:04:05.169 00:04:05.169 Suite: blobfs_async_ut 00:04:05.169 Test: fs_init ...passed 00:04:05.169 Test: fs_open ...passed 00:04:05.169 Test: fs_create ...passed 00:04:05.169 Test: fs_truncate ...passed 00:04:05.169 Test: fs_rename ...[2024-07-14 21:03:16.632880] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=file1 to deleted 00:04:05.169 passed 00:04:05.169 Test: fs_rw_async ...passed 00:04:05.169 Test: fs_writev_readv_async ...passed 00:04:05.169 Test: tree_find_buffer_ut ...passed 00:04:05.169 Test: channel_ops ...passed 00:04:05.169 Test: channel_ops_sync ...passed 00:04:05.169 00:04:05.169 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.169 suites 1 1 n/a 0 0 00:04:05.170 tests 10 10 10 0 0 00:04:05.170 asserts 292 292 292 0 n/a 00:04:05.170 00:04:05.170 Elapsed time = 0.141 seconds 00:04:05.170 21:03:16 unittest.unittest_blob_blobfs -- unit/unittest.sh@46 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_sync_ut/blobfs_sync_ut 00:04:05.170 00:04:05.170 00:04:05.170 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.170 http://cunit.sourceforge.net/ 00:04:05.170 00:04:05.170 00:04:05.170 Suite: blobfs_sync_ut 00:04:05.430 Test: cache_read_after_write ...[2024-07-14 21:03:16.735565] /home/vagrant/spdk_repo/spdk/lib/blobfs/blobfs.c:1478:spdk_fs_delete_file_async: *ERROR*: Cannot find the file=testfile to deleted 00:04:05.430 passed 00:04:05.430 Test: file_length ...passed 00:04:05.430 Test: append_write_to_extend_blob ...passed 00:04:05.430 Test: partial_buffer ...passed 00:04:05.430 Test: cache_write_null_buffer ...passed 00:04:05.430 Test: fs_create_sync ...passed 00:04:05.430 Test: fs_rename_sync ...passed 00:04:05.430 Test: cache_append_no_cache ...passed 00:04:05.430 Test: fs_delete_file_without_close ...passed 00:04:05.430 00:04:05.430 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.430 suites 1 1 n/a 0 0 00:04:05.430 tests 9 9 9 0 0 00:04:05.430 asserts 345 345 345 0 n/a 00:04:05.430 00:04:05.430 Elapsed time = 0.266 seconds 00:04:05.430 21:03:16 unittest.unittest_blob_blobfs -- unit/unittest.sh@47 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/blobfs/blobfs_bdev.c/blobfs_bdev_ut 00:04:05.430 00:04:05.430 00:04:05.430 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.430 http://cunit.sourceforge.net/ 00:04:05.430 00:04:05.430 00:04:05.430 Suite: blobfs_bdev_ut 00:04:05.430 Test: spdk_blobfs_bdev_detect_test ...[2024-07-14 21:03:16.843223] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:04:05.430 passed 00:04:05.430 Test: spdk_blobfs_bdev_create_test ...[2024-07-14 21:03:16.843538] /home/vagrant/spdk_repo/spdk/module/blobfs/bdev/blobfs_bdev.c: 59:_blobfs_bdev_unload_cb: *ERROR*: Failed to unload blobfs on bdev ut_bdev: errno -1 00:04:05.430 passed 00:04:05.430 Test: spdk_blobfs_bdev_mount_test ...passed 00:04:05.430 00:04:05.430 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.430 suites 1 1 n/a 0 0 00:04:05.430 tests 3 3 3 0 0 00:04:05.430 asserts 9 9 9 0 n/a 00:04:05.430 00:04:05.430 Elapsed time = 0.000 seconds 00:04:05.430 00:04:05.430 real 0m10.842s 00:04:05.430 user 0m10.750s 00:04:05.430 sys 0m0.226s 00:04:05.430 21:03:16 unittest.unittest_blob_blobfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.430 
************************************ 00:04:05.430 END TEST unittest_blob_blobfs 00:04:05.430 ************************************ 00:04:05.430 21:03:16 unittest.unittest_blob_blobfs -- common/autotest_common.sh@10 -- # set +x 00:04:05.430 21:03:16 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:05.430 21:03:16 unittest -- unit/unittest.sh@234 -- # run_test unittest_event unittest_event 00:04:05.430 21:03:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.430 21:03:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.430 21:03:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:05.430 ************************************ 00:04:05.430 START TEST unittest_event 00:04:05.430 ************************************ 00:04:05.430 21:03:16 unittest.unittest_event -- common/autotest_common.sh@1123 -- # unittest_event 00:04:05.430 21:03:16 unittest.unittest_event -- unit/unittest.sh@51 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/app.c/app_ut 00:04:05.430 00:04:05.430 00:04:05.430 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.430 http://cunit.sourceforge.net/ 00:04:05.430 00:04:05.430 00:04:05.430 Suite: app_suite 00:04:05.430 Test: test_spdk_app_parse_args ...app_ut [options] 00:04:05.430 00:04:05.430 CPU options: 00:04:05.430 -m, --cpumask core mask (like 0xF) or core list of '[]' embraced for DPDK 00:04:05.430 (like [0,1,10]) 00:04:05.430 --lcores lcore to CPU mapping list. The list is in the format: 00:04:05.430 [<,lcores[@CPUs]>...] 00:04:05.430 app_ut: invalid option -- z 00:04:05.430 lcores and cpus list are grouped by '(' and ')', e.g '--lcores "(5-7)@(10-12)"' 00:04:05.430 Within the group, '-' is used for range separator, 00:04:05.430 ',' is used for single number separator. 00:04:05.430 '( )' can be omitted for single element group, 00:04:05.430 '@' can be omitted if cpus and lcores have the same value 00:04:05.430 --disable-cpumask-locks Disable CPU core lock files. 00:04:05.430 --interrupt-mode set app to interrupt mode (Warning: CPU usage will be reduced only if all 00:04:05.430 pollers in the app support interrupt mode) 00:04:05.430 -p, --main-core main (primary) core for DPDK 00:04:05.430 00:04:05.430 Configuration options: 00:04:05.430 -c, --config, --json JSON config file 00:04:05.430 -r, --rpc-socket RPC listen address (default /var/tmp/spdk.sock) 00:04:05.430 --no-rpc-server skip RPC server initialization. This option ignores '--rpc-socket' value. 
00:04:05.430 --wait-for-rpc wait for RPCs to initialize subsystems 00:04:05.430 --rpcs-allowed comma-separated list of permitted RPCS 00:04:05.430 --json-ignore-init-errors don't exit on invalid config entry 00:04:05.430 00:04:05.430 Memory options: 00:04:05.430 --iova-mode set IOVA mode ('pa' for IOVA_PA and 'va' for IOVA_VA) 00:04:05.430 --base-virtaddr the base virtual address for DPDK (default: 0x200000000000) 00:04:05.430 --huge-dir use a specific hugetlbfs mount to reserve memory from 00:04:05.430 -R, --huge-unlink unlink huge files after initialization 00:04:05.430 -n, --mem-channels number of memory channels used for DPDK 00:04:05.430 -s, --mem-size memory size in MB for DPDK (default: all hugepage memory) 00:04:05.430 --msg-mempool-size global message memory pool size in count (default: 262143) 00:04:05.430 --no-huge run without using hugepages 00:04:05.430 -i, --shm-id shared memory ID (optional) 00:04:05.430 -g, --single-file-segments force creating just one hugetlbfs file 00:04:05.430 00:04:05.430 PCI options: 00:04:05.430 -A, --pci-allowed pci addr to allow (-B and -A cannot be used at the same time) 00:04:05.430 -B, --pci-blocked pci addr to block (can be used more than once) 00:04:05.430 -u, --no-pci disable PCI access 00:04:05.430 --vfio-vf-token VF token (UUID) shared between SR-IOV PF and VFs for vfio_pci driver 00:04:05.430 00:04:05.430 Log options: 00:04:05.430 -L, --logflag enable log flag (all, app_rpc, json_util, rpc, thread, trace) 00:04:05.430 --silence-noticelog disable notice level logging to stderr 00:04:05.430 00:04:05.430 Trace options: 00:04:05.430 --num-trace-entries number of trace entries for each core, must be power of 2, 00:04:05.430 setting 0 to disable trace (default 32768) 00:04:05.430 Tracepoints vary in size and can use more than one trace entry. 00:04:05.430 -e, --tpoint-group [:] 00:04:05.430 group_name - tracepoint group name for spdk trace buffers (thread, all). 00:04:05.430 tpoint_mask - tracepoint mask for enabling individual tpoints inside 00:04:05.430 a tracepoint group. First tpoint inside a group can be enabled by 00:04:05.430 setting tpoint_mask to 1 (e.g. bdev:0x1). Groups and masks can be 00:04:05.430 combined (e.g. thread,bdev:0x1). All available tpoints can be found 00:04:05.430 in /include/spdk_internal/trace_defs.h 00:04:05.430 00:04:05.430 Other options: 00:04:05.430 -h, --help show this usage 00:04:05.430 -v, --version print SPDK version 00:04:05.430 -d, --limit-coredump do not set max coredump size to RLIM_INFINITY 00:04:05.430 --env-context Opaque context for use of the env implementation
00:04:05.431 app_ut: unrecognized option `--test-long-opt' 00:04:05.431 [2024-07-14 21:03:16.893950] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1192:spdk_app_parse_args: *ERROR*: Duplicated option 'c' between app-specific command line parameter and generic spdk opts. 00:04:05.431 [2024-07-14 21:03:16.894184] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1372:spdk_app_parse_args: *ERROR*: -B and -W cannot be used at the same time
00:04:05.431 passed 00:04:05.431 00:04:05.431 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.431 suites 1 1 n/a 0 0 00:04:05.431 tests 1 1 1 0 0 00:04:05.431 asserts 8 8 8 0 n/a 00:04:05.431 00:04:05.431 Elapsed time = 0.000 seconds 00:04:05.431 [2024-07-14 21:03:16.894317] /home/vagrant/spdk_repo/spdk/lib/event/app.c:1277:spdk_app_parse_args: *ERROR*: Invalid main core --single-file-segments 00:04:05.431 21:03:16 unittest.unittest_event -- unit/unittest.sh@52 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/event/reactor.c/reactor_ut 00:04:05.431 00:04:05.431 00:04:05.431 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.431 http://cunit.sourceforge.net/ 00:04:05.431 00:04:05.431 00:04:05.431 Suite: app_suite 00:04:05.431 Test: test_create_reactor ...passed 00:04:05.431 Test: test_init_reactors ...passed 00:04:05.431 Test: test_event_call ...passed 00:04:05.431 Test: test_schedule_thread ...passed 00:04:05.431 Test: test_reschedule_thread ...passed 00:04:05.431 Test: test_bind_thread ...passed 00:04:05.431 Test: test_for_each_reactor ...passed 00:04:05.431 Test: test_reactor_stats ...passed 00:04:05.431 Test: test_scheduler ...passed 00:04:05.431 Test: test_governor ...passed 00:04:05.431 00:04:05.431 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.431 suites 1 1 n/a 0 0 00:04:05.431 tests 10 10 10 0 0 00:04:05.431 asserts 336 336 336 0 n/a 00:04:05.431 00:04:05.431 Elapsed time = 0.000 seconds 00:04:05.431 00:04:05.431 real 0m0.015s 00:04:05.431 user 0m0.001s 00:04:05.431 sys 0m0.016s 00:04:05.431 21:03:16 unittest.unittest_event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.431 21:03:16 unittest.unittest_event -- common/autotest_common.sh@10 -- # set +x 00:04:05.431 ************************************ 00:04:05.431 END TEST unittest_event ************************************ 00:04:05.431 21:03:16 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:05.431 21:03:16 unittest -- unit/unittest.sh@235 -- # uname -s 00:04:05.431 21:03:16 unittest -- unit/unittest.sh@235 -- # '[' FreeBSD = Linux ']' 00:04:05.431 21:03:16 unittest -- unit/unittest.sh@239 -- # run_test unittest_accel /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:04:05.431 21:03:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.431 21:03:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.431 21:03:16 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:05.431 ************************************ 00:04:05.431 START TEST unittest_accel 00:04:05.431 ************************************ 00:04:05.431 21:03:16 unittest.unittest_accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/accel/accel.c/accel_ut 00:04:05.431 00:04:05.431 00:04:05.431 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.431 http://cunit.sourceforge.net/ 00:04:05.431 00:04:05.431 00:04:05.431 Suite: accel_sequence 00:04:05.431 Test: test_sequence_fill_copy ...passed 00:04:05.431 Test: test_sequence_abort ...passed 00:04:05.431 Test: test_sequence_append_error ...passed 00:04:05.431
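Every *_ut binary in this log (blob_ut earlier, app_ut and reactor_ut above, accel_ut here) is a standalone CUnit 2.1-3 program, which is where the repeated "Run Summary: Type Total Ran Passed Failed Inactive" tables come from. A minimal sketch of that harness; the suite and test names are illustrative, not SPDK's:

    #include <CUnit/Basic.h>

    static void
    test_example(void)
    {
        /* CU_ASSERT* results feed the "asserts" column of the Run Summary. */
        CU_ASSERT_EQUAL(1 + 1, 2);
    }

    int
    main(void)
    {
        CU_pSuite suite;
        unsigned int num_failures;

        if (CU_initialize_registry() != CUE_SUCCESS) {
            return CU_get_error();
        }

        /* One registry can hold many suites; blob_ut above registers 16. */
        suite = CU_add_suite("example_suite", NULL, NULL);
        if (suite == NULL ||
            CU_add_test(suite, "test_example", test_example) == NULL) {
            CU_cleanup_registry();
            return CU_get_error();
        }

        CU_basic_set_mode(CU_BRM_VERBOSE); /* prints the per-test "...passed" lines */
        CU_basic_run_tests();
        num_failures = CU_get_number_of_failures();
        CU_cleanup_registry();

        return (int)num_failures;
    }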
Test: test_sequence_completion_error ...[2024-07-14 21:03:16.958292] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1946:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x3f465fecef00 00:04:05.431 passed 00:04:05.431 Test: test_sequence_decompress ...[2024-07-14 21:03:16.958592] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1946:accel_sequence_task_cb: *ERROR*: Failed to execute decompress operation, sequence: 0x3f465fecef00 00:04:05.431 [2024-07-14 21:03:16.958618] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1856:accel_process_sequence: *ERROR*: Failed to submit fill operation, sequence: 0x3f465fecef00 00:04:05.431 [2024-07-14 21:03:16.958638] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1856:accel_process_sequence: *ERROR*: Failed to submit decompress operation, sequence: 0x3f465fecef00 00:04:05.431 passed 00:04:05.431 Test: test_sequence_reverse ...passed 00:04:05.431 Test: test_sequence_copy_elision ...passed 00:04:05.431 Test: test_sequence_accel_buffers ...passed 00:04:05.431 Test: test_sequence_memory_domain ...passed 00:04:05.431 Test: test_sequence_module_memory_domain ...[2024-07-14 21:03:16.961092] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1748:accel_task_pull_data: *ERROR*: Failed to pull data from memory domain: UT_DMA, rc: -7 00:04:05.431 [2024-07-14 21:03:16.961130] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1787:accel_task_push_data: *ERROR*: Failed to push data to memory domain: UT_DMA, rc: -48 00:04:05.431 passed 00:04:05.431 Test: test_sequence_crypto ...passed 00:04:05.431 Test: test_sequence_driver ...[2024-07-14 21:03:16.962471] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1895:accel_process_sequence: *ERROR*: Failed to execute sequence: 0x3f465fecebc0 using driver: ut 00:04:05.431 [2024-07-14 21:03:16.962515] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c:1960:accel_sequence_task_cb: *ERROR*: Failed to execute fill operation, sequence: 0x3f465fecebc0 through driver: ut 00:04:05.431 passed 00:04:05.431 Test: test_sequence_same_iovs ...passed 00:04:05.431 Test: test_sequence_crc32 ...passed 00:04:05.431 Suite: accel 00:04:05.431 Test: test_spdk_accel_task_complete ...passed 00:04:05.431 Test: test_get_task ...passed 00:04:05.431 Test: test_spdk_accel_submit_copy ...passed 00:04:05.431 Test: test_spdk_accel_submit_dualcast ...[2024-07-14 21:03:16.963507] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:04:05.432 passed 00:04:05.432 Test: test_spdk_accel_submit_compare ...passed 00:04:05.432 Test: test_spdk_accel_submit_fill ...passed 00:04:05.432 Test: test_spdk_accel_submit_crc32c ...passed 00:04:05.432 Test: test_spdk_accel_submit_crc32cv ...passed 00:04:05.432 Test: test_spdk_accel_submit_copy_crc32c ...passed 00:04:05.432 Test: test_spdk_accel_submit_xor ...passed 00:04:05.432 Test: test_spdk_accel_module_find_by_name ...passed 00:04:05.432 Test: test_spdk_accel_module_register ...[2024-07-14 21:03:16.963530] /home/vagrant/spdk_repo/spdk/lib/accel/accel.c: 422:spdk_accel_submit_dualcast: *ERROR*: Dualcast requires 4K alignment on dst addresses 00:04:05.432 passed 00:04:05.432 00:04:05.432 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.432 suites 2 2 n/a 0 0 00:04:05.432 tests 26 26 26 0 0 00:04:05.432 asserts 830 830 830 0 n/a 00:04:05.432 00:04:05.432 Elapsed time = 0.008 seconds 00:04:05.432 00:04:05.432 real 0m0.016s 00:04:05.432 user 0m0.015s 00:04:05.432 sys 0m0.000s 00:04:05.432 21:03:16 unittest.unittest_accel 
-- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.432 ************************************ 00:04:05.432 END TEST unittest_accel 00:04:05.432 ************************************ 00:04:05.432 21:03:16 unittest.unittest_accel -- common/autotest_common.sh@10 -- # set +x 00:04:05.691 21:03:16 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:05.691 21:03:16 unittest -- unit/unittest.sh@240 -- # run_test unittest_ioat /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:04:05.691 21:03:16 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.691 21:03:16 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.691 21:03:17 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:05.691 ************************************ 00:04:05.691 START TEST unittest_ioat 00:04:05.691 ************************************ 00:04:05.691 21:03:17 unittest.unittest_ioat -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/ioat/ioat.c/ioat_ut 00:04:05.691 00:04:05.691 00:04:05.691 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.691 http://cunit.sourceforge.net/ 00:04:05.691 00:04:05.691 00:04:05.691 Suite: ioat 00:04:05.691 Test: ioat_state_check ...passed 00:04:05.691 00:04:05.691 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.691 suites 1 1 n/a 0 0 00:04:05.691 tests 1 1 1 0 0 00:04:05.691 asserts 32 32 32 0 n/a 00:04:05.691 00:04:05.691 Elapsed time = 0.000 seconds 00:04:05.691 00:04:05.691 real 0m0.004s 00:04:05.691 user 0m0.006s 00:04:05.691 sys 0m0.003s 00:04:05.691 21:03:17 unittest.unittest_ioat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.691 ************************************ 00:04:05.691 21:03:17 unittest.unittest_ioat -- common/autotest_common.sh@10 -- # set +x 00:04:05.691 END TEST unittest_ioat 00:04:05.691 ************************************ 00:04:05.691 21:03:17 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:05.691 21:03:17 unittest -- unit/unittest.sh@241 -- # grep -q '#define SPDK_CONFIG_IDXD 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:05.691 21:03:17 unittest -- unit/unittest.sh@242 -- # run_test unittest_idxd_user /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:04:05.691 21:03:17 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.691 21:03:17 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.691 21:03:17 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:05.691 ************************************ 00:04:05.691 START TEST unittest_idxd_user 00:04:05.691 ************************************ 00:04:05.691 21:03:17 unittest.unittest_idxd_user -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/idxd/idxd_user.c/idxd_user_ut 00:04:05.691 00:04:05.691 00:04:05.691 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.691 http://cunit.sourceforge.net/ 00:04:05.691 00:04:05.691 00:04:05.691 Suite: idxd_user 00:04:05.691 Test: test_idxd_wait_cmd ...passed 00:04:05.691 Test: test_idxd_reset_dev ...[2024-07-14 21:03:17.061144] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: Command status reg reports error 0x1 00:04:05.691 [2024-07-14 21:03:17.061381] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 46:idxd_wait_cmd: *ERROR*: Command timeout, waited 1 00:04:05.691 [2024-07-14 21:03:17.061426] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 52:idxd_wait_cmd: *ERROR*: 
Command status reg reports error 0x1 00:04:05.691 [2024-07-14 21:03:17.061441] /home/vagrant/spdk_repo/spdk/lib/idxd/idxd_user.c: 132:idxd_reset_dev: *ERROR*: Error resetting device 4294967274 00:04:05.691 passed 00:04:05.691 Test: test_idxd_group_config ...passed 00:04:05.691 Test: test_idxd_wq_config ...passed 00:04:05.691 00:04:05.691 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.691 suites 1 1 n/a 0 0 00:04:05.691 tests 4 4 4 0 0 00:04:05.691 asserts 20 20 20 0 n/a 00:04:05.691 00:04:05.691 Elapsed time = 0.000 seconds 00:04:05.691 00:04:05.691 real 0m0.006s 00:04:05.691 user 0m0.005s 00:04:05.691 sys 0m0.004s 00:04:05.691 21:03:17 unittest.unittest_idxd_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.691 ************************************ 00:04:05.691 END TEST unittest_idxd_user 00:04:05.691 ************************************ 00:04:05.691 21:03:17 unittest.unittest_idxd_user -- common/autotest_common.sh@10 -- # set +x 00:04:05.691 21:03:17 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:05.691 21:03:17 unittest -- unit/unittest.sh@244 -- # run_test unittest_iscsi unittest_iscsi 00:04:05.691 21:03:17 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.691 21:03:17 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.691 21:03:17 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:05.691 ************************************ 00:04:05.691 START TEST unittest_iscsi 00:04:05.691 ************************************ 00:04:05.691 21:03:17 unittest.unittest_iscsi -- common/autotest_common.sh@1123 -- # unittest_iscsi 00:04:05.691 21:03:17 unittest.unittest_iscsi -- unit/unittest.sh@68 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/conn.c/conn_ut 00:04:05.691 00:04:05.691 00:04:05.691 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.691 http://cunit.sourceforge.net/ 00:04:05.691 00:04:05.691 00:04:05.691 Suite: conn_suite 00:04:05.691 Test: read_task_split_in_order_case ...passed 00:04:05.691 Test: read_task_split_reverse_order_case ...passed 00:04:05.691 Test: propagate_scsi_error_status_for_split_read_tasks ...passed 00:04:05.691 Test: process_non_read_task_completion_test ...passed 00:04:05.691 Test: free_tasks_on_connection ...passed 00:04:05.691 Test: free_tasks_with_queued_datain ...passed 00:04:05.691 Test: abort_queued_datain_task_test ...passed 00:04:05.691 Test: abort_queued_datain_tasks_test ...passed 00:04:05.691 00:04:05.691 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.691 suites 1 1 n/a 0 0 00:04:05.691 tests 8 8 8 0 0 00:04:05.691 asserts 230 230 230 0 n/a 00:04:05.691 00:04:05.691 Elapsed time = 0.000 seconds 00:04:05.691 21:03:17 unittest.unittest_iscsi -- unit/unittest.sh@69 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/param.c/param_ut 00:04:05.691 00:04:05.691 00:04:05.691 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.691 http://cunit.sourceforge.net/ 00:04:05.691 00:04:05.691 00:04:05.691 Suite: iscsi_suite 00:04:05.691 Test: param_negotiation_test ...passed 00:04:05.691 Test: list_negotiation_test ...passed 00:04:05.691 Test: parse_valid_test ...passed 00:04:05.691 Test: parse_invalid_test ...[2024-07-14 21:03:17.115347] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:04:05.691 [2024-07-14 21:03:17.115606] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 201:iscsi_parse_param: *ERROR*: '=' not found 00:04:05.691 [2024-07-14 21:03:17.115639] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 207:iscsi_parse_param: *ERROR*: Empty key 00:04:05.691 passed 00:04:05.691 00:04:05.691 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.691 suites 1 1 n/a 0 0 00:04:05.691 tests 4 4 4 0 0 00:04:05.691 asserts 161 161 161 0 n/a 00:04:05.691 00:04:05.691 Elapsed time = 0.000 seconds 00:04:05.692 [2024-07-14 21:03:17.115676] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 8193 00:04:05.692 [2024-07-14 21:03:17.115697] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 247:iscsi_parse_param: *ERROR*: Overflow Val 256 00:04:05.692 [2024-07-14 21:03:17.115714] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 214:iscsi_parse_param: *ERROR*: Key name length is bigger than 63 00:04:05.692 [2024-07-14 21:03:17.115731] /home/vagrant/spdk_repo/spdk/lib/iscsi/param.c: 228:iscsi_parse_param: *ERROR*: Duplicated Key B 00:04:05.692 21:03:17 unittest.unittest_iscsi -- unit/unittest.sh@70 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/tgt_node.c/tgt_node_ut 00:04:05.692 00:04:05.692 00:04:05.692 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.692 http://cunit.sourceforge.net/ 00:04:05.692 00:04:05.692 00:04:05.692 Suite: iscsi_target_node_suite 00:04:05.692 Test: add_lun_test_cases ...[2024-07-14 21:03:17.120109] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1253:iscsi_tgt_node_add_lun: *ERROR*: Target has active connections (count=1) 00:04:05.692 [2024-07-14 21:03:17.120254] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1258:iscsi_tgt_node_add_lun: *ERROR*: Specified LUN ID (-2) is negative 00:04:05.692 [2024-07-14 21:03:17.120276] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:04:05.692 [2024-07-14 21:03:17.120289] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1264:iscsi_tgt_node_add_lun: *ERROR*: SCSI device is not found 00:04:05.692 [2024-07-14 21:03:17.120297] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1270:iscsi_tgt_node_add_lun: *ERROR*: spdk_scsi_dev_add_lun failed 00:04:05.692 passed 00:04:05.692 Test: allow_any_allowed ...passed 00:04:05.692 Test: allow_ipv6_allowed ...passed 00:04:05.692 Test: allow_ipv6_denied ...passed 00:04:05.692 Test: allow_ipv6_invalid ...passed 00:04:05.692 Test: allow_ipv4_allowed ...passed 00:04:05.692 Test: allow_ipv4_denied ...passed 00:04:05.692 Test: allow_ipv4_invalid ...passed 00:04:05.692 Test: node_access_allowed ...passed 00:04:05.692 Test: node_access_denied_by_empty_netmask ...passed 00:04:05.692 Test: node_access_multi_initiator_groups_cases ...passed 00:04:05.692 Test: allow_iscsi_name_multi_maps_case ...passed 00:04:05.692 Test: chap_param_test_cases ...passed 00:04:05.692 00:04:05.692 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.692 suites 1 1 n/a 0 0 00:04:05.692 tests 13 13 13 0 0 00:04:05.692 asserts 50 50 50 0 n/a 00:04:05.692 00:04:05.692 Elapsed time = 0.000 seconds 00:04:05.692 [2024-07-14 21:03:17.120394] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=0) 00:04:05.692 [2024-07-14 21:03:17.120406] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=0,r=0,m=1) 00:04:05.692 [2024-07-14 21:03:17.120414] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=0,m=1) 00:04:05.692 [2024-07-14 
21:03:17.120422] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1040:iscsi_check_chap_params: *ERROR*: Invalid combination of CHAP params (d=1,r=1,m=1) 00:04:05.692 [2024-07-14 21:03:17.120431] /home/vagrant/spdk_repo/spdk/lib/iscsi/tgt_node.c:1030:iscsi_check_chap_params: *ERROR*: Invalid auth group ID (-1) 00:04:05.692 21:03:17 unittest.unittest_iscsi -- unit/unittest.sh@71 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/iscsi.c/iscsi_ut 00:04:05.692 00:04:05.692 00:04:05.692 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.692 http://cunit.sourceforge.net/ 00:04:05.692 00:04:05.692 00:04:05.692 Suite: iscsi_suite 00:04:05.692 Test: op_login_check_target_test ...passed 00:04:05.692 Test: op_login_session_normal_test ...[2024-07-14 21:03:17.127902] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1439:iscsi_op_login_check_target: *ERROR*: access denied 00:04:05.692 [2024-07-14 21:03:17.128175] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:04:05.692 [2024-07-14 21:03:17.128197] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:04:05.692 [2024-07-14 21:03:17.128212] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1636:iscsi_op_login_session_normal: *ERROR*: TargetName is empty 00:04:05.692 [2024-07-14 21:03:17.128271] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 695:append_iscsi_sess: *ERROR*: spdk_get_iscsi_sess_by_tsih failed 00:04:05.692 [2024-07-14 21:03:17.128295] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:04:05.692 [2024-07-14 21:03:17.128331] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c: 703:append_iscsi_sess: *ERROR*: no MCS session for init port name=iqn.2017-11.spdk.io:i0001, tsih=256, cid=0 00:04:05.692 [2024-07-14 21:03:17.128347] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1475:iscsi_op_login_check_session: *ERROR*: isid=0, tsih=256, cid=0:spdk_append_iscsi_sess() failed 00:04:05.692 passed 00:04:05.692 Test: maxburstlength_test ...passed 00:04:05.692 Test: underflow_for_read_transfer_test ...passed 00:04:05.692 Test: underflow_for_zero_read_transfer_test ...passed 00:04:05.692 Test: underflow_for_request_sense_test ...passed 00:04:05.692 Test: underflow_for_check_condition_test ...passed 00:04:05.692 Test: add_transfer_task_test ...passed 00:04:05.692 Test: get_transfer_task_test ...passed 00:04:05.692 Test: del_transfer_task_test ...passed 00:04:05.692 Test: clear_all_transfer_tasks_test ...[2024-07-14 21:03:17.128442] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:04:05.692 [2024-07-14 21:03:17.128460] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4569:iscsi_pdu_hdr_handle: *ERROR*: processing PDU header (opcode=5) failed on NULL(NULL) 00:04:05.692 passed 00:04:05.692 Test: build_iovs_test ...passed 00:04:05.692 Test: build_iovs_with_md_test ...passed 00:04:05.692 Test: pdu_hdr_op_login_test ...passed 00:04:05.692 Test: pdu_hdr_op_text_test ...[2024-07-14 21:03:17.128680] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1256:iscsi_op_login_rsp_init: *ERROR*: transit error 00:04:05.692 [2024-07-14 21:03:17.128702] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1264:iscsi_op_login_rsp_init: *ERROR*: unsupported version min 1/max 0, expecting 0 00:04:05.692 [2024-07-14 21:03:17.128717] 
/home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:1277:iscsi_op_login_rsp_init: *ERROR*: Received reserved NSG code: 2 00:04:05.692 [2024-07-14 21:03:17.128743] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2259:iscsi_pdu_hdr_op_text: *ERROR*: data segment len(=69) > immediate data len(=68) 00:04:05.692 passed 00:04:05.692 Test: pdu_hdr_op_logout_test ...passed 00:04:05.692 Test: pdu_hdr_op_scsi_test ...[2024-07-14 21:03:17.128759] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2290:iscsi_pdu_hdr_op_text: *ERROR*: final and continue 00:04:05.692 [2024-07-14 21:03:17.128774] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2304:iscsi_pdu_hdr_op_text: *ERROR*: The correct itt is 5679, and the current itt is 5678... 00:04:05.692 [2024-07-14 21:03:17.128792] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:2535:iscsi_pdu_hdr_op_logout: *ERROR*: Target can accept logout only with reason "close the session" on discovery session. 1 is not acceptable reason. 00:04:05.692 [2024-07-14 21:03:17.128813] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:04:05.692 [2024-07-14 21:03:17.128827] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3354:iscsi_pdu_hdr_op_scsi: *ERROR*: ISCSI_OP_SCSI not allowed in discovery and invalid session 00:04:05.692 [2024-07-14 21:03:17.128841] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3382:iscsi_pdu_hdr_op_scsi: *ERROR*: Bidirectional CDB is not supported 00:04:05.692 [2024-07-14 21:03:17.128856] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3416:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=69) > immediate data len(=68) 00:04:05.692 [2024-07-14 21:03:17.128871] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3423:iscsi_pdu_hdr_op_scsi: *ERROR*: data segment len(=68) > task transfer len(=67) 00:04:05.692 [2024-07-14 21:03:17.128888] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3446:iscsi_pdu_hdr_op_scsi: *ERROR*: Reject scsi cmd with EDTL > 0 but (R | W) == 0 00:04:05.692 passed 00:04:05.692 Test: pdu_hdr_op_task_mgmt_test ...passed 00:04:05.692 Test: pdu_hdr_op_nopout_test ...passed 00:04:05.692 Test: pdu_hdr_op_data_test ...passed 00:04:05.692 Test: empty_text_with_cbit_test ...passed 00:04:05.692 Test: pdu_payload_read_test ...[2024-07-14 21:03:17.128906] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3623:iscsi_pdu_hdr_op_task: *ERROR*: ISCSI_OP_TASK not allowed in discovery and invalid session 00:04:05.692 [2024-07-14 21:03:17.128932] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3712:iscsi_pdu_hdr_op_task: *ERROR*: unsupported function 0 00:04:05.692 [2024-07-14 21:03:17.128947] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3731:iscsi_pdu_hdr_op_nopout: *ERROR*: ISCSI_OP_NOPOUT not allowed in discovery session 00:04:05.692 [2024-07-14 21:03:17.128955] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:04:05.692 [2024-07-14 21:03:17.128961] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3753:iscsi_pdu_hdr_op_nopout: *ERROR*: invalid transfer tag 0x4d3 00:04:05.692 [2024-07-14 21:03:17.128967] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:3761:iscsi_pdu_hdr_op_nopout: *ERROR*: got NOPOUT ITT=0xffffffff, I=0 00:04:05.692 [2024-07-14 21:03:17.128976] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4204:iscsi_pdu_hdr_op_data: *ERROR*: ISCSI_OP_SCSI_DATAOUT not allowed in discovery session 00:04:05.692 [2024-07-14 21:03:17.128986] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4221:iscsi_pdu_hdr_op_data: 
*ERROR*: Not found task for transfer_tag=0 00:04:05.692 [2024-07-14 21:03:17.128993] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4229:iscsi_pdu_hdr_op_data: *ERROR*: the dataout pdu data length is larger than the value sent by R2T PDU 00:04:05.692 [2024-07-14 21:03:17.129000] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4235:iscsi_pdu_hdr_op_data: *ERROR*: The r2t task tag is 0, and the dataout task tag is 1 00:04:05.692 [2024-07-14 21:03:17.129007] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4240:iscsi_pdu_hdr_op_data: *ERROR*: DataSN(1) exp=0 error 00:04:05.692 [2024-07-14 21:03:17.129014] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4251:iscsi_pdu_hdr_op_data: *ERROR*: offset(4096) error 00:04:05.692 [2024-07-14 21:03:17.129021] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4263:iscsi_pdu_hdr_op_data: *ERROR*: R2T burst(65536) > MaxBurstLength(65535) 00:04:05.692 passed 00:04:05.692 Test: data_out_pdu_sequence_test ...[2024-07-14 21:03:17.129249] /home/vagrant/spdk_repo/spdk/lib/iscsi/iscsi.c:4650:iscsi_pdu_payload_read: *ERROR*: Data(65537) > MaxSegment(65536) 00:04:05.692 passed 00:04:05.692 Test: immediate_data_and_data_out_pdu_sequence_test ...passed 00:04:05.692 00:04:05.692 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.692 suites 1 1 n/a 0 0 00:04:05.692 tests 24 24 24 0 0 00:04:05.692 asserts 150253 150253 150253 0 n/a 00:04:05.692 00:04:05.692 Elapsed time = 0.008 seconds 00:04:05.692 21:03:17 unittest.unittest_iscsi -- unit/unittest.sh@72 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/init_grp.c/init_grp_ut 00:04:05.692 00:04:05.692 00:04:05.692 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.692 http://cunit.sourceforge.net/ 00:04:05.692 00:04:05.692 00:04:05.692 Suite: init_grp_suite 00:04:05.692 Test: create_initiator_group_success_case ...passed 00:04:05.692 Test: find_initiator_group_success_case ...passed 00:04:05.692 Test: register_initiator_group_twice_case ...passed 00:04:05.692 Test: add_initiator_name_success_case ...passed 00:04:05.692 Test: add_initiator_name_fail_case ...[2024-07-14 21:03:17.136766] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 54:iscsi_init_grp_add_initiator: *ERROR*: > MAX_INITIATOR(=256) is not allowed 00:04:05.692 passed 00:04:05.692 Test: delete_all_initiator_names_success_case ...passed 00:04:05.692 Test: add_netmask_success_case ...passed 00:04:05.692 Test: add_netmask_fail_case ...passed 00:04:05.692 Test: delete_all_netmasks_success_case ...passed 00:04:05.693 Test: initiator_name_overwrite_all_to_any_case ...passed 00:04:05.693 Test: netmask_overwrite_all_to_any_case ...passed 00:04:05.693 Test: add_delete_initiator_names_case ...passed 00:04:05.693 Test: add_duplicated_initiator_names_case ...passed 00:04:05.693 Test: delete_nonexisting_initiator_names_case ...passed 00:04:05.693 Test: add_delete_netmasks_case ...passed 00:04:05.693 Test: add_duplicated_netmasks_case ...passed 00:04:05.693 Test: delete_nonexisting_netmasks_case ...passed 00:04:05.693 00:04:05.693 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.693 suites 1 1 n/a 0 0 00:04:05.693 tests 17 17 17 0 0 00:04:05.693 asserts 108 108 108 0 n/a 00:04:05.693 00:04:05.693 Elapsed time = 0.000 seconds 00:04:05.693 [2024-07-14 21:03:17.137027] /home/vagrant/spdk_repo/spdk/lib/iscsi/init_grp.c: 188:iscsi_init_grp_add_netmask: *ERROR*: > MAX_NETMASK(=256) is not allowed 00:04:05.693 21:03:17 unittest.unittest_iscsi -- unit/unittest.sh@73 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/iscsi/portal_grp.c/portal_grp_ut 
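For context on the param_ut failures earlier in this iscsi section: every negative case feeds iscsi_parse_param() text that breaks the "Key=Value" grammar, and the error strings pin the rules down: a '=' separator is mandatory; the key must be non-empty and at most 63 bytes; the value must fit its cap (the "Overflow Val 8193" case trips one byte over an 8192-byte limit, and other keys use smaller caps such as 256); and a repeated key ("Duplicated Key B") is rejected. A standalone sketch of those checks, written from the log messages rather than from lib/iscsi/param.c, could look like:

    /* param_check.c: hypothetical re-creation of the validations param_ut
     * exercises; MAX_KEY_LEN/MAX_VAL_LEN are inferred from the log. */
    #include <stddef.h>
    #include <string.h>

    #define MAX_KEY_LEN 63      /* "Key name length is bigger than 63" */
    #define MAX_VAL_LEN 8192    /* "Overflow Val 8193" is one byte over */

    static int
    parse_param(const char *text)
    {
        const char *eq = strchr(text, '=');

        if (eq == NULL)
            return -1;                          /* "'=' not found" */
        if (eq == text)
            return -1;                          /* "Empty key" */
        if ((size_t)(eq - text) > MAX_KEY_LEN)
            return -1;                          /* key too long */
        if (strlen(eq + 1) > MAX_VAL_LEN)
            return -1;                          /* "Overflow Val ..." */
        return 0;
    }

Duplicate detection needs one extra piece of state (the set of keys already seen), which is what the "Duplicated Key B" case covers.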
00:04:05.693 00:04:05.693 00:04:05.693 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.693 http://cunit.sourceforge.net/ 00:04:05.693 00:04:05.693 00:04:05.693 Suite: portal_grp_suite 00:04:05.693 Test: portal_create_ipv4_normal_case ...passed 00:04:05.693 Test: portal_create_ipv6_normal_case ...passed 00:04:05.693 Test: portal_create_ipv4_wildcard_case ...passed 00:04:05.693 Test: portal_create_ipv6_wildcard_case ...passed 00:04:05.693 Test: portal_create_twice_case ...[2024-07-14 21:03:17.141979] /home/vagrant/spdk_repo/spdk/lib/iscsi/portal_grp.c: 113:iscsi_portal_create: *ERROR*: portal (192.168.2.0, 3260) already exists 00:04:05.693 passed 00:04:05.693 Test: portal_grp_register_unregister_case ...passed 00:04:05.693 Test: portal_grp_register_twice_case ...passed 00:04:05.693 Test: portal_grp_add_delete_case ...passed 00:04:05.693 Test: portal_grp_add_delete_twice_case ...passed 00:04:05.693 00:04:05.693 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.693 suites 1 1 n/a 0 0 00:04:05.693 tests 9 9 9 0 0 00:04:05.693 asserts 44 44 44 0 n/a 00:04:05.693 00:04:05.693 Elapsed time = 0.000 seconds 00:04:05.693 00:04:05.693 real 0m0.039s 00:04:05.693 user 0m0.011s 00:04:05.693 sys 0m0.030s 00:04:05.693 21:03:17 unittest.unittest_iscsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.693 21:03:17 unittest.unittest_iscsi -- common/autotest_common.sh@10 -- # set +x 00:04:05.693 ************************************ 00:04:05.693 END TEST unittest_iscsi 00:04:05.693 ************************************ 00:04:05.693 21:03:17 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:05.693 21:03:17 unittest -- unit/unittest.sh@245 -- # run_test unittest_json unittest_json 00:04:05.693 21:03:17 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.693 21:03:17 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.693 21:03:17 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:05.693 ************************************ 00:04:05.693 START TEST unittest_json 00:04:05.693 ************************************ 00:04:05.693 21:03:17 unittest.unittest_json -- common/autotest_common.sh@1123 -- # unittest_json 00:04:05.693 21:03:17 unittest.unittest_json -- unit/unittest.sh@77 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_parse.c/json_parse_ut 00:04:05.693 00:04:05.693 00:04:05.693 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.693 http://cunit.sourceforge.net/ 00:04:05.693 00:04:05.693 00:04:05.693 Suite: json 00:04:05.693 Test: test_parse_literal ...passed 00:04:05.693 Test: test_parse_string_simple ...passed 00:04:05.693 Test: test_parse_string_control_chars ...passed 00:04:05.693 Test: test_parse_string_utf8 ...passed 00:04:05.693 Test: test_parse_string_escapes_twochar ...passed 00:04:05.693 Test: test_parse_string_escapes_unicode ...passed 00:04:05.693 Test: test_parse_number ...passed 00:04:05.693 Test: test_parse_array ...passed 00:04:05.693 Test: test_parse_object ...passed 00:04:05.693 Test: test_parse_nesting ...passed 00:04:05.693 Test: test_parse_comment ...passed 00:04:05.693 00:04:05.693 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.693 suites 1 1 n/a 0 0 00:04:05.693 tests 11 11 11 0 0 00:04:05.693 asserts 1516 1516 1516 0 n/a 00:04:05.693 00:04:05.693 Elapsed time = 0.000 seconds 00:04:05.693 21:03:17 unittest.unittest_json -- unit/unittest.sh@78 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_util.c/json_util_ut 00:04:05.693 00:04:05.693 00:04:05.693 
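The json_parse suite that just completed covers SPDK's two-pass JSON parser. The usual calling pattern, sketched from the public header as I remember it (spdk/json.h; verify the signature and flag name against your tree before relying on it):

    #include <stdlib.h>
    #include "spdk/json.h"

    static struct spdk_json_val *
    parse_doc(char *buf, size_t len)
    {
        void *end;
        /* pass 1: values == NULL just counts the tokens required */
        ssize_t n = spdk_json_parse(buf, len, NULL, 0, &end, 0);
        if (n < 0)
            return NULL;
        struct spdk_json_val *vals = calloc((size_t)n, sizeof(*vals));
        if (vals == NULL)
            return NULL;
        /* pass 2: decode in place into the allocated token array */
        if (spdk_json_parse(buf, len, vals, (size_t)n, &end,
                            SPDK_JSON_PARSE_FLAG_DECODE_IN_PLACE) < 0) {
            free(vals);
            return NULL;
        }
        return vals;
    }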
CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.693 http://cunit.sourceforge.net/ 00:04:05.693 00:04:05.693 00:04:05.693 Suite: json 00:04:05.693 Test: test_strequal ...passed 00:04:05.693 Test: test_num_to_uint16 ...passed 00:04:05.693 Test: test_num_to_int32 ...passed 00:04:05.693 Test: test_num_to_uint64 ...passed 00:04:05.693 Test: test_decode_object ...passed 00:04:05.693 Test: test_decode_array ...passed 00:04:05.693 Test: test_decode_bool ...passed 00:04:05.693 Test: test_decode_uint16 ...passed 00:04:05.693 Test: test_decode_int32 ...passed 00:04:05.693 Test: test_decode_uint32 ...passed 00:04:05.693 Test: test_decode_uint64 ...passed 00:04:05.693 Test: test_decode_string ...passed 00:04:05.693 Test: test_decode_uuid ...passed 00:04:05.693 Test: test_find ...passed 00:04:05.693 Test: test_find_array ...passed 00:04:05.693 Test: test_iterating ...passed 00:04:05.693 Test: test_free_object ...passed 00:04:05.693 00:04:05.693 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.693 suites 1 1 n/a 0 0 00:04:05.693 tests 17 17 17 0 0 00:04:05.693 asserts 236 236 236 0 n/a 00:04:05.693 00:04:05.693 Elapsed time = 0.000 seconds 00:04:05.693 21:03:17 unittest.unittest_json -- unit/unittest.sh@79 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/json/json_write.c/json_write_ut 00:04:05.693 00:04:05.693 00:04:05.693 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.693 http://cunit.sourceforge.net/ 00:04:05.693 00:04:05.693 00:04:05.693 Suite: json 00:04:05.693 Test: test_write_literal ...passed 00:04:05.693 Test: test_write_string_simple ...passed 00:04:05.693 Test: test_write_string_escapes ...passed 00:04:05.693 Test: test_write_string_utf16le ...passed 00:04:05.693 Test: test_write_number_int32 ...passed 00:04:05.693 Test: test_write_number_uint32 ...passed 00:04:05.693 Test: test_write_number_uint128 ...passed 00:04:05.693 Test: test_write_string_number_uint128 ...passed 00:04:05.693 Test: test_write_number_int64 ...passed 00:04:05.693 Test: test_write_number_uint64 ...passed 00:04:05.693 Test: test_write_number_double ...passed 00:04:05.693 Test: test_write_uuid ...passed 00:04:05.693 Test: test_write_array ...passed 00:04:05.693 Test: test_write_object ...passed 00:04:05.693 Test: test_write_nesting ...passed 00:04:05.693 Test: test_write_val ...passed 00:04:05.693 00:04:05.693 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.693 suites 1 1 n/a 0 0 00:04:05.693 tests 16 16 16 0 0 00:04:05.693 asserts 918 918 918 0 n/a 00:04:05.693 00:04:05.693 Elapsed time = 0.000 seconds 00:04:05.693 21:03:17 unittest.unittest_json -- unit/unittest.sh@80 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/jsonrpc/jsonrpc_server.c/jsonrpc_server_ut 00:04:05.693 00:04:05.693 00:04:05.693 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.693 http://cunit.sourceforge.net/ 00:04:05.693 00:04:05.693 00:04:05.693 Suite: jsonrpc 00:04:05.693 Test: test_parse_request ...passed 00:04:05.693 Test: test_parse_request_streaming ...passed 00:04:05.693 00:04:05.693 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.693 suites 1 1 n/a 0 0 00:04:05.693 tests 2 2 2 0 0 00:04:05.693 asserts 289 289 289 0 n/a 00:04:05.693 00:04:05.693 Elapsed time = 0.000 seconds 00:04:05.693 00:04:05.693 real 0m0.029s 00:04:05.693 user 0m0.007s 00:04:05.693 sys 0m0.029s 00:04:05.693 21:03:17 unittest.unittest_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.693 ************************************ 00:04:05.693 END TEST unittest_json 00:04:05.693 
************************************ 00:04:05.693 21:03:17 unittest.unittest_json -- common/autotest_common.sh@10 -- # set +x 00:04:05.951 21:03:17 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:05.951 21:03:17 unittest -- unit/unittest.sh@246 -- # run_test unittest_rpc unittest_rpc 00:04:05.951 21:03:17 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.951 21:03:17 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.951 21:03:17 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:05.951 ************************************ 00:04:05.951 START TEST unittest_rpc 00:04:05.951 ************************************ 00:04:05.951 21:03:17 unittest.unittest_rpc -- common/autotest_common.sh@1123 -- # unittest_rpc 00:04:05.951 21:03:17 unittest.unittest_rpc -- unit/unittest.sh@84 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rpc/rpc.c/rpc_ut 00:04:05.951 00:04:05.951 00:04:05.951 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.951 http://cunit.sourceforge.net/ 00:04:05.951 00:04:05.951 00:04:05.951 Suite: rpc 00:04:05.951 Test: test_jsonrpc_handler ...passed 00:04:05.951 Test: test_spdk_rpc_is_method_allowed ...passed 00:04:05.951 Test: test_rpc_get_methods ...passed 00:04:05.951 Test: test_rpc_spdk_get_version ...passed 00:04:05.951 Test: test_spdk_rpc_listen_close ...passed 00:04:05.951 Test: test_rpc_run_multiple_servers ...passed 00:04:05.951 00:04:05.951 [2024-07-14 21:03:17.266470] /home/vagrant/spdk_repo/spdk/lib/rpc/rpc.c: 446:rpc_get_methods: *ERROR*: spdk_json_decode_object failed 00:04:05.951 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.951 suites 1 1 n/a 0 0 00:04:05.951 tests 6 6 6 0 0 00:04:05.951 asserts 23 23 23 0 n/a 00:04:05.951 00:04:05.951 Elapsed time = 0.000 seconds 00:04:05.951 00:04:05.951 real 0m0.006s 00:04:05.951 user 0m0.001s 00:04:05.951 sys 0m0.004s 00:04:05.951 ************************************ 00:04:05.951 END TEST unittest_rpc 00:04:05.951 ************************************ 00:04:05.951 21:03:17 unittest.unittest_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.951 21:03:17 unittest.unittest_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:05.951 21:03:17 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:05.951 21:03:17 unittest -- unit/unittest.sh@247 -- # run_test unittest_notify /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:04:05.951 21:03:17 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.951 21:03:17 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.951 21:03:17 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:05.951 ************************************ 00:04:05.951 START TEST unittest_notify 00:04:05.951 ************************************ 00:04:05.951 21:03:17 unittest.unittest_notify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/notify/notify.c/notify_ut 00:04:05.951 00:04:05.951 00:04:05.951 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.951 http://cunit.sourceforge.net/ 00:04:05.951 00:04:05.951 00:04:05.951 Suite: app_suite 00:04:05.951 Test: notify ...passed 00:04:05.951 00:04:05.951 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.951 suites 1 1 n/a 0 0 00:04:05.951 tests 1 1 1 0 0 00:04:05.951 asserts 13 13 13 0 n/a 00:04:05.951 00:04:05.951 Elapsed time = 0.000 seconds 00:04:05.951 00:04:05.951 real 0m0.006s 00:04:05.951 user 0m0.005s 00:04:05.951 sys 0m0.008s 
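The one failing path in the rpc suite above is rpc_get_methods being handed parameters that spdk_json_decode_object() cannot decode, hence the single ERROR line in an otherwise green run. On a live target the same method arrives over SPDK's JSON-RPC 2.0 socket; a minimal well-formed request, embedded here as a C literal (the "id" value is arbitrary, and the optional params object is omitted):

    /* wire form of the method the failing test case drives */
    static const char req[] =
        "{\"jsonrpc\": \"2.0\", \"id\": 1, \"method\": \"rpc_get_methods\"}";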
00:04:05.951 21:03:17 unittest.unittest_notify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.951 ************************************ 00:04:05.951 END TEST unittest_notify 00:04:05.951 21:03:17 unittest.unittest_notify -- common/autotest_common.sh@10 -- # set +x 00:04:05.951 ************************************ 00:04:05.951 21:03:17 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:05.951 21:03:17 unittest -- unit/unittest.sh@248 -- # run_test unittest_nvme unittest_nvme 00:04:05.951 21:03:17 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.951 21:03:17 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.951 21:03:17 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:05.951 ************************************ 00:04:05.951 START TEST unittest_nvme 00:04:05.951 ************************************ 00:04:05.951 21:03:17 unittest.unittest_nvme -- common/autotest_common.sh@1123 -- # unittest_nvme 00:04:05.951 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@88 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme.c/nvme_ut 00:04:05.951 00:04:05.951 00:04:05.951 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.951 http://cunit.sourceforge.net/ 00:04:05.951 00:04:05.951 00:04:05.951 Suite: nvme 00:04:05.951 Test: test_opc_data_transfer ...passed 00:04:05.951 Test: test_spdk_nvme_transport_id_parse_trtype ...passed 00:04:05.951 Test: test_spdk_nvme_transport_id_parse_adrfam ...passed 00:04:05.951 Test: test_trid_parse_and_compare ...[2024-07-14 21:03:17.372066] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1199:parse_next_key: *ERROR*: Key without ':' or '=' separator 00:04:05.951 [2024-07-14 21:03:17.372322] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:04:05.951 [2024-07-14 21:03:17.372347] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1212:parse_next_key: *ERROR*: Key length 32 greater than maximum allowed 31 00:04:05.951 passed 00:04:05.951 Test: test_trid_trtype_str ...passed 00:04:05.951 Test: test_trid_adrfam_str ...passed 00:04:05.951 Test: test_nvme_ctrlr_probe ...[2024-07-14 21:03:17.372363] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:04:05.951 [2024-07-14 21:03:17.372378] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1222:parse_next_key: *ERROR*: Key without value 00:04:05.951 [2024-07-14 21:03:17.372392] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1256:spdk_nvme_transport_id_parse: *ERROR*: Failed to parse transport ID 00:04:05.951 passed 00:04:05.951 Test: test_spdk_nvme_probe ...passed 00:04:05.951 Test: test_spdk_nvme_connect ...passed 00:04:05.951 Test: test_nvme_ctrlr_probe_internal ...[2024-07-14 21:03:17.372555] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:04:05.951 [2024-07-14 21:03:17.372592] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:04:05.951 [2024-07-14 21:03:17.372607] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:04:05.951 [2024-07-14 21:03:17.372625] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 822:nvme_probe_internal: *ERROR*: NVMe trtype 256 (PCIE) not available 00:04:05.951 [2024-07-14 21:03:17.372639] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 
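The trid_parse cases at the top of this nvme suite hand spdk_nvme_transport_id_parse() strings that violate its grammar: whitespace-separated key:value pairs, each key with a separator and at most 31 bytes ("Key length 32 greater than maximum allowed 31"). For contrast, a well-formed PCIe transport ID per the public API (sketch; the traddr below is a made-up BDF):

    #include "spdk/nvme.h"

    static int
    build_trid(struct spdk_nvme_transport_id *trid)
    {
        /* whitespace-separated key:value pairs; keys capped at 31 bytes */
        if (spdk_nvme_transport_id_parse(trid,
                "trtype:PCIe traddr:0000:04:00.0") != 0) {
            return -1;   /* the malformed strings in the log land here */
        }
        return 0;
    }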
00:04:05.951 [2024-07-14 21:03:17.372678] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1010:spdk_nvme_connect: *ERROR*: No transport ID specified 00:04:05.951 [2024-07-14 21:03:17.372759] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:04:05.951 passed 00:04:05.951 Test: test_nvme_init_controllers ...passed 00:04:05.951 Test: test_nvme_driver_init ...[2024-07-14 21:03:17.372784] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 00:04:05.951 [2024-07-14 21:03:17.372794] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:04:05.951 [2024-07-14 21:03:17.372808] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 00:04:05.951 [2024-07-14 21:03:17.372824] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 578:nvme_driver_init: *ERROR*: primary process failed to reserve memory 00:04:05.951 [2024-07-14 21:03:17.372834] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 601:nvme_driver_init: *ERROR*: primary process is not started yet 00:04:05.951 passed 00:04:05.951 Test: test_spdk_nvme_detach ...passed 00:04:05.951 Test: test_nvme_completion_poll_cb ...passed 00:04:05.951 Test: test_nvme_user_copy_cmd_complete ...[2024-07-14 21:03:17.488319] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c: 596:nvme_driver_init: *ERROR*: timeout waiting for primary process to init 00:04:05.951 passed 00:04:05.951 Test: test_nvme_allocate_request_null ...passed 00:04:05.952 Test: test_nvme_allocate_request ...passed 00:04:05.952 Test: test_nvme_free_request ...passed 00:04:05.952 Test: test_nvme_allocate_request_user_copy ...passed 00:04:05.952 Test: test_nvme_robust_mutex_init_shared ...passed 00:04:05.952 Test: test_nvme_request_check_timeout ...passed 00:04:05.952 Test: test_nvme_wait_for_completion ...passed 00:04:05.952 Test: test_spdk_nvme_parse_func ...passed 00:04:05.952 Test: test_spdk_nvme_detach_async ...passed 00:04:05.952 Test: test_nvme_parse_addr ...passed 00:04:05.952 00:04:05.952 Run Summary: Type Total Ran Passed Failed Inactive 00:04:05.952 suites 1 1 n/a 0 0 00:04:05.952 tests 25 25 25 0 0 00:04:05.952 asserts 326 326 326 0 n/a 00:04:05.952 00:04:05.952 Elapsed time = 0.000 seconds 00:04:05.952 [2024-07-14 21:03:17.488567] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme.c:1609:nvme_parse_addr: *ERROR*: addr and service must both be non-NULL 00:04:05.952 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@89 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr.c/nvme_ctrlr_ut 00:04:05.952 00:04:05.952 00:04:05.952 CUnit - A unit testing framework for C - Version 2.1-3 00:04:05.952 http://cunit.sourceforge.net/ 00:04:05.952 00:04:05.952 00:04:05.952 Suite: nvme_ctrlr 00:04:05.952 Test: test_nvme_ctrlr_init_en_1_rdy_0 ...[2024-07-14 21:03:17.496300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:05.952 passed 00:04:05.952 Test: test_nvme_ctrlr_init_en_1_rdy_1 ...[2024-07-14 21:03:17.498234] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_init_en_0_rdy_0 ...[2024-07-14 21:03:17.499449] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: 
*ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_init_en_0_rdy_1 ...[2024-07-14 21:03:17.500653] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_rr ...[2024-07-14 21:03:17.501946] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 [2024-07-14 21:03:17.503190] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-14 21:03:17.504426] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-14 21:03:17.505602] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:04:06.211 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_wrr ...[2024-07-14 21:03:17.508091] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 [2024-07-14 21:03:17.510432] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-14 21:03:17.511680] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:04:06.211 Test: test_nvme_ctrlr_init_en_0_rdy_0_ams_vs ...[2024-07-14 21:03:17.514120] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 [2024-07-14 21:03:17.515349] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22[2024-07-14 21:03:17.517741] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr enable failed with error: -22passed 00:04:06.211 Test: test_nvme_ctrlr_init_delay ...[2024-07-14 21:03:17.520211] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_alloc_io_qpair_rr_1 ...[2024-07-14 21:03:17.521487] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 [2024-07-14 21:03:17.521558] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:04:06.211 passed 00:04:06.211 Test: test_ctrlr_get_default_ctrlr_opts ...passed 00:04:06.211 Test: test_ctrlr_get_default_io_qpair_opts ...passed 00:04:06.211 Test: test_alloc_io_qpair_wrr_1 ...passed 00:04:06.211 Test: test_alloc_io_qpair_wrr_2 ...passed 00:04:06.211 Test: test_spdk_nvme_ctrlr_update_firmware ...[2024-07-14 21:03:17.521585] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:04:06.211 [2024-07-14 21:03:17.521602] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:04:06.211 [2024-07-14 21:03:17.521617] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 394:nvme_ctrlr_create_io_qpair: *ERROR*: [] invalid queue priority for default round robin arbitration method 00:04:06.211 [2024-07-14 21:03:17.521687] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 [2024-07-14 21:03:17.521733] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 [2024-07-14 21:03:17.521754] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [] No free I/O queue IDs 00:04:06.211 [2024-07-14 21:03:17.521803] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4993:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_update_firmware invalid size! 00:04:06.211 [2024-07-14 21:03:17.521821] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_fail ...passed 00:04:06.211 Test: test_nvme_ctrlr_construct_intel_support_log_page_list ...passed 00:04:06.211 Test: test_nvme_ctrlr_set_supported_features ...passed 00:04:06.211 Test: test_nvme_ctrlr_set_host_feature ...[2024-07-14 21:03:17.521837] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5070:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] nvme_ctrlr_cmd_fw_commit failed! 00:04:06.211 [2024-07-14 21:03:17.521858] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:5030:spdk_nvme_ctrlr_update_firmware: *ERROR*: [] spdk_nvme_ctrlr_fw_image_download failed! 00:04:06.211 [2024-07-14 21:03:17.521887] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [] in failed state. 
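Almost every nvme_ctrlr case logs "admin_queue_size 0 is less than minimum defined by NVMe spec, use min value" because the unit-test fixture constructs controllers from zeroed options and lets the library clamp them. Application code starts from the defaults instead; a sketch of the normal sequence (spdk_nvme_connect() and the opts helper per the public API; call and field names are recalled, not quoted from the header):

    #include "spdk/nvme.h"

    static struct spdk_nvme_ctrlr *
    connect_ctrlr(const struct spdk_nvme_transport_id *trid)
    {
        struct spdk_nvme_ctrlr_opts opts;

        spdk_nvme_ctrlr_get_default_ctrlr_opts(&opts, sizeof(opts));
        /* anything below the spec minimum is clamped, with the warning above */
        opts.admin_queue_size = 32;
        return spdk_nvme_connect(trid, &opts, sizeof(opts));
    }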
00:04:06.211 [2024-07-14 21:03:17.521924] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_spdk_nvme_ctrlr_doorbell_buffer_config ...passed 00:04:06.211 Test: test_nvme_ctrlr_test_active_ns ...[2024-07-14 21:03:17.523141] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_test_active_ns_error_case ...passed 00:04:06.211 Test: test_spdk_nvme_ctrlr_reconnect_io_qpair ...passed 00:04:06.211 Test: test_spdk_nvme_ctrlr_set_trid ...passed 00:04:06.211 Test: test_nvme_ctrlr_init_set_nvmf_ioccsz ...[2024-07-14 21:03:17.560154] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_init_set_num_queues ...[2024-07-14 21:03:17.567117] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_init_set_keep_alive_timeout ...[2024-07-14 21:03:17.568340] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 [2024-07-14 21:03:17.568372] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:3003:nvme_ctrlr_set_keep_alive_timeout_done: *ERROR*: [] Keep alive timeout Get Feature failed: SC 6 SCT 0 00:04:06.211 passed 00:04:06.211 Test: test_alloc_io_qpair_fail ...[2024-07-14 21:03:17.569579] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 [2024-07-14 21:03:17.569635] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c: 506:spdk_nvme_ctrlr_alloc_io_qpair: *ERROR*: [] nvme_transport_ctrlr_connect_io_qpair() failed 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_add_remove_process ...passed 00:04:06.211 Test: test_nvme_ctrlr_set_arbitration_feature ...passed 00:04:06.211 Test: test_nvme_ctrlr_set_state ...passed 00:04:06.211 Test: test_nvme_ctrlr_active_ns_list_v0 ...[2024-07-14 21:03:17.569698] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:1547:_nvme_ctrlr_set_state: *ERROR*: [] Specified timeout would cause integer overflow. Defaulting to no timeout. 
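test_alloc_io_qpair_fail above forces the transport connect inside spdk_nvme_ctrlr_alloc_io_qpair() to fail, and the earlier "No free I/O queue IDs" cases exhaust the qid space the same call draws from. The happy path it is breaking, roughly (ctrlr as returned by the connect sketch above; NULL/0 selects the controller's default I/O qpair options):

    struct spdk_nvme_qpair *qpair;

    qpair = spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    if (qpair == NULL) {
        /* transport connect failed or no free queue IDs, as logged */
    }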
00:04:06.211 [2024-07-14 21:03:17.569720] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_active_ns_list_v2 ...[2024-07-14 21:03:17.572471] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_ns_mgmt ...[2024-07-14 21:03:17.579675] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_reset ...[2024-07-14 21:03:17.580925] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_aer_callback ...[2024-07-14 21:03:17.581025] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_ns_attr_changed ...[2024-07-14 21:03:17.582256] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_identify_namespaces_iocs_specific_next ...passed 00:04:06.211 Test: test_nvme_ctrlr_set_supported_log_pages ...passed 00:04:06.211 Test: test_nvme_ctrlr_set_intel_supported_log_pages ...[2024-07-14 21:03:17.583565] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_parse_ana_log_page ...passed 00:04:06.211 Test: test_nvme_ctrlr_ana_resize ...[2024-07-14 21:03:17.584759] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_get_memory_domains ...passed 00:04:06.211 Test: test_nvme_transport_ctrlr_ready ...[2024-07-14 21:03:17.586021] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4152:nvme_ctrlr_process_init: *ERROR*: [] Transport controller ready step failed: rc -1 00:04:06.211 passed 00:04:06.211 Test: test_nvme_ctrlr_disable ...[2024-07-14 21:03:17.586054] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4205:nvme_ctrlr_process_init: *ERROR*: [] Ctrlr operation failed with error: -1, ctrlr state: 53 (error) 00:04:06.211 [2024-07-14 21:03:17.586070] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr.c:4274:nvme_ctrlr_construct: *ERROR*: [] admin_queue_size 0 is less than minimum defined by NVMe spec, use min value 00:04:06.211 passed 00:04:06.211 00:04:06.211 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.211 suites 1 1 n/a 0 0 00:04:06.211 tests 44 44 44 0 0 00:04:06.211 asserts 10434 10434 10434 0 n/a 00:04:06.211 00:04:06.211 Elapsed time = 0.039 seconds 00:04:06.211 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@90 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_cmd.c/nvme_ctrlr_cmd_ut 00:04:06.211 00:04:06.211 00:04:06.211 CUnit - A unit testing framework 
for C - Version 2.1-3 00:04:06.211 http://cunit.sourceforge.net/ 00:04:06.211 00:04:06.211 00:04:06.211 Suite: nvme_ctrlr_cmd 00:04:06.211 Test: test_get_log_pages ...passed 00:04:06.212 Test: test_set_feature_cmd ...passed 00:04:06.212 Test: test_set_feature_ns_cmd ...passed 00:04:06.212 Test: test_get_feature_cmd ...passed 00:04:06.212 Test: test_get_feature_ns_cmd ...passed 00:04:06.212 Test: test_abort_cmd ...passed 00:04:06.212 Test: test_set_host_id_cmds ...[2024-07-14 21:03:17.595647] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ctrlr_cmd.c: 508:nvme_ctrlr_cmd_set_host_id: *ERROR*: Invalid host ID size 1024 00:04:06.212 passed 00:04:06.212 Test: test_io_cmd_raw_no_payload_build ...passed 00:04:06.212 Test: test_io_raw_cmd ...passed 00:04:06.212 Test: test_io_raw_cmd_with_md ...passed 00:04:06.212 Test: test_namespace_attach ...passed 00:04:06.212 Test: test_namespace_detach ...passed 00:04:06.212 Test: test_namespace_create ...passed 00:04:06.212 Test: test_namespace_delete ...passed 00:04:06.212 Test: test_doorbell_buffer_config ...passed 00:04:06.212 Test: test_format_nvme ...passed 00:04:06.212 Test: test_fw_commit ...passed 00:04:06.212 Test: test_fw_image_download ...passed 00:04:06.212 Test: test_sanitize ...passed 00:04:06.212 Test: test_directive ...passed 00:04:06.212 Test: test_nvme_request_add_abort ...passed 00:04:06.212 Test: test_spdk_nvme_ctrlr_cmd_abort ...passed 00:04:06.212 Test: test_nvme_ctrlr_cmd_identify ...passed 00:04:06.212 Test: test_spdk_nvme_ctrlr_cmd_security_receive_send ...passed 00:04:06.212 00:04:06.212 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.212 suites 1 1 n/a 0 0 00:04:06.212 tests 24 24 24 0 0 00:04:06.212 asserts 198 198 198 0 n/a 00:04:06.212 00:04:06.212 Elapsed time = 0.000 seconds 00:04:06.212 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@91 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ctrlr_ocssd_cmd.c/nvme_ctrlr_ocssd_cmd_ut 00:04:06.212 00:04:06.212 00:04:06.212 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.212 http://cunit.sourceforge.net/ 00:04:06.212 00:04:06.212 00:04:06.212 Suite: nvme_ctrlr_cmd 00:04:06.212 Test: test_geometry_cmd ...passed 00:04:06.212 Test: test_spdk_nvme_ctrlr_is_ocssd_supported ...passed 00:04:06.212 00:04:06.212 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.212 suites 1 1 n/a 0 0 00:04:06.212 tests 2 2 2 0 0 00:04:06.212 asserts 7 7 7 0 n/a 00:04:06.212 00:04:06.212 Elapsed time = 0.000 seconds 00:04:06.212 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@92 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns.c/nvme_ns_ut 00:04:06.212 00:04:06.212 00:04:06.212 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.212 http://cunit.sourceforge.net/ 00:04:06.212 00:04:06.212 00:04:06.212 Suite: nvme 00:04:06.212 Test: test_nvme_ns_construct ...passed 00:04:06.212 Test: test_nvme_ns_uuid ...passed 00:04:06.212 Test: test_nvme_ns_csi ...passed 00:04:06.212 Test: test_nvme_ns_data ...passed 00:04:06.212 Test: test_nvme_ns_set_identify_data ...passed 00:04:06.212 Test: test_spdk_nvme_ns_get_values ...passed 00:04:06.212 Test: test_spdk_nvme_ns_is_active ...passed 00:04:06.212 Test: spdk_nvme_ns_supports ...passed 00:04:06.212 Test: test_nvme_ns_has_supported_iocs_specific_data ...passed 00:04:06.212 Test: test_nvme_ctrlr_identify_ns_iocs_specific ...passed 00:04:06.212 Test: test_nvme_ctrlr_identify_id_desc ...passed 00:04:06.212 Test: test_nvme_ns_find_id_desc ...passed 00:04:06.212 00:04:06.212 Run Summary: Type Total Ran 
Passed Failed Inactive 00:04:06.212 suites 1 1 n/a 0 0 00:04:06.212 tests 12 12 12 0 0 00:04:06.212 asserts 95 95 95 0 n/a 00:04:06.212 00:04:06.212 Elapsed time = 0.000 seconds 00:04:06.212 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@93 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_cmd.c/nvme_ns_cmd_ut 00:04:06.212 00:04:06.212 00:04:06.212 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.212 http://cunit.sourceforge.net/ 00:04:06.212 00:04:06.212 00:04:06.212 Suite: nvme_ns_cmd 00:04:06.212 Test: split_test ...passed 00:04:06.212 Test: split_test2 ...passed 00:04:06.212 Test: split_test3 ...passed 00:04:06.212 Test: split_test4 ...passed 00:04:06.212 Test: test_nvme_ns_cmd_flush ...passed 00:04:06.212 Test: test_nvme_ns_cmd_dataset_management ...passed 00:04:06.212 Test: test_nvme_ns_cmd_copy ...passed 00:04:06.212 Test: test_io_flags ...passed 00:04:06.212 Test: test_nvme_ns_cmd_write_zeroes ...[2024-07-14 21:03:17.613184] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xfffc 00:04:06.212 passed 00:04:06.212 Test: test_nvme_ns_cmd_write_uncorrectable ...passed 00:04:06.212 Test: test_nvme_ns_cmd_reservation_register ...passed 00:04:06.212 Test: test_nvme_ns_cmd_reservation_release ...passed 00:04:06.212 Test: test_nvme_ns_cmd_reservation_acquire ...passed 00:04:06.212 Test: test_nvme_ns_cmd_reservation_report ...passed 00:04:06.212 Test: test_cmd_child_request ...passed 00:04:06.212 Test: test_nvme_ns_cmd_readv ...passed 00:04:06.212 Test: test_nvme_ns_cmd_read_with_md ...passed 00:04:06.212 Test: test_nvme_ns_cmd_writev ...passed 00:04:06.212 Test: test_nvme_ns_cmd_write_with_md ...passed 00:04:06.212 Test: test_nvme_ns_cmd_zone_append_with_md ...passed 00:04:06.212 Test: test_nvme_ns_cmd_zone_appendv_with_md ...passed 00:04:06.212 Test: test_nvme_ns_cmd_comparev ...passed 00:04:06.212 Test: test_nvme_ns_cmd_compare_and_write ...passed 00:04:06.212 Test: test_nvme_ns_cmd_compare_with_md ...passed 00:04:06.212 Test: test_nvme_ns_cmd_comparev_with_md ...passed 00:04:06.212 Test: test_nvme_ns_cmd_setup_request ...passed 00:04:06.212 Test: test_spdk_nvme_ns_cmd_readv_with_md ...passed 00:04:06.212 Test: test_spdk_nvme_ns_cmd_writev_ext ...passed 00:04:06.212 Test: test_spdk_nvme_ns_cmd_readv_ext ...passed 00:04:06.212 Test: test_nvme_ns_cmd_verify ...passed[2024-07-14 21:03:17.613398] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 292:_nvme_ns_cmd_split_request_prp: *ERROR*: child_length 200 not even multiple of lba_size 512 00:04:06.212 [2024-07-14 21:03:17.613491] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:04:06.212 [2024-07-14 21:03:17.613503] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_ns_cmd.c: 144:_is_io_flags_valid: *ERROR*: Invalid io_flags 0xffff000f 00:04:06.212 00:04:06.212 Test: test_nvme_ns_cmd_io_mgmt_send ...passed 00:04:06.212 Test: test_nvme_ns_cmd_io_mgmt_recv ...passed 00:04:06.212 00:04:06.212 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.212 suites 1 1 n/a 0 0 00:04:06.212 tests 32 32 32 0 0 00:04:06.212 asserts 550 550 550 0 n/a 00:04:06.212 00:04:06.212 Elapsed time = 0.000 seconds 00:04:06.212 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@94 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_ns_ocssd_cmd.c/nvme_ns_ocssd_cmd_ut 00:04:06.212 00:04:06.212 00:04:06.212 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.212 http://cunit.sourceforge.net/ 
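test_io_flags in the ns_cmd suite passes masks such as 0xfffc and 0xffff000f, which _is_io_flags_valid() rejects: bits outside the defined SPDK_NVME_IO_FLAGS_* set are reserved and must be zero. A valid submission for contrast (classic eight-argument read signature; ns, qpair, and buf are assumed to exist, and the FUA flag name is from spdk/nvme.h as I recall it):

    /* hypothetical completion callback matching spdk_nvme_cmd_cb */
    static void
    io_done_cb(void *arg, const struct spdk_nvme_cpl *cpl)
    {
        /* completion handling elided */
    }

    /* read 8 blocks at LBA 0 with Force Unit Access set; any bit outside
     * the defined flag mask would fail _is_io_flags_valid() as in the log */
    int rc = spdk_nvme_ns_cmd_read(ns, qpair, buf, 0, 8, io_done_cb, NULL,
                                   SPDK_NVME_IO_FLAGS_FORCE_UNIT_ACCESS);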
00:04:06.212 00:04:06.212 00:04:06.212 Suite: nvme_ns_cmd 00:04:06.212 Test: test_nvme_ocssd_ns_cmd_vector_reset ...passed 00:04:06.212 Test: test_nvme_ocssd_ns_cmd_vector_reset_single_entry ...passed 00:04:06.212 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md ...passed 00:04:06.212 Test: test_nvme_ocssd_ns_cmd_vector_read_with_md_single_entry ...passed 00:04:06.212 Test: test_nvme_ocssd_ns_cmd_vector_read ...passed 00:04:06.212 Test: test_nvme_ocssd_ns_cmd_vector_read_single_entry ...passed 00:04:06.212 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md ...passed 00:04:06.212 Test: test_nvme_ocssd_ns_cmd_vector_write_with_md_single_entry ...passed 00:04:06.212 Test: test_nvme_ocssd_ns_cmd_vector_write ...passed 00:04:06.212 Test: test_nvme_ocssd_ns_cmd_vector_write_single_entry ...passed 00:04:06.212 Test: test_nvme_ocssd_ns_cmd_vector_copy ...passed 00:04:06.212 Test: test_nvme_ocssd_ns_cmd_vector_copy_single_entry ...passed 00:04:06.212 00:04:06.212 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.212 suites 1 1 n/a 0 0 00:04:06.212 tests 12 12 12 0 0 00:04:06.212 asserts 123 123 123 0 n/a 00:04:06.212 00:04:06.212 Elapsed time = 0.000 seconds 00:04:06.212 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@95 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_qpair.c/nvme_qpair_ut 00:04:06.212 00:04:06.212 00:04:06.212 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.212 http://cunit.sourceforge.net/ 00:04:06.212 00:04:06.212 00:04:06.212 Suite: nvme_qpair 00:04:06.212 Test: test3 ...passed 00:04:06.212 Test: test_ctrlr_failed ...passed 00:04:06.212 Test: struct_packing ...passed 00:04:06.212 Test: test_nvme_qpair_process_completions ...[2024-07-14 21:03:17.626199] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:04:06.212 [2024-07-14 21:03:17.626466] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:04:06.212 passed 00:04:06.212 Test: test_nvme_completion_is_retry ...passed 00:04:06.212 Test: test_get_status_string ...passed 00:04:06.212 Test: test_nvme_qpair_add_cmd_error_injection ...passed 00:04:06.212 Test: test_nvme_qpair_submit_request ...passed 00:04:06.212 Test: test_nvme_qpair_resubmit_request_with_transport_failed ...passed 00:04:06.212 Test: test_nvme_qpair_manual_complete_request ...passed 00:04:06.212 Test: test_nvme_qpair_init_deinit ...[2024-07-14 21:03:17.626544] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 0 00:04:06.212 [2024-07-14 21:03:17.626564] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 805:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (Device not configured) on qpair id 1 00:04:06.212 [2024-07-14 21:03:17.626631] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:04:06.212 passed 00:04:06.212 Test: test_nvme_get_sgl_print_info ...passed 00:04:06.212 00:04:06.212 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.212 suites 1 1 n/a 0 0 00:04:06.212 tests 12 12 12 0 0 00:04:06.212 asserts 154 154 154 0 n/a 00:04:06.212 00:04:06.212 Elapsed time = 0.000 seconds 00:04:06.212 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@96 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie.c/nvme_pcie_ut 00:04:06.212 00:04:06.212 00:04:06.212 CUnit - A unit testing framework for 
C - Version 2.1-3 00:04:06.212 http://cunit.sourceforge.net/ 00:04:06.212 00:04:06.212 00:04:06.212 Suite: nvme_pcie 00:04:06.212 Test: test_prp_list_append ...passed 00:04:06.212 Test: test_nvme_pcie_hotplug_monitor ...passed 00:04:06.212 Test: test_shadow_doorbell_update ...passed 00:04:06.212 Test: test_build_contig_hw_sgl_request ...passed 00:04:06.212 Test: test_nvme_pcie_qpair_build_metadata ...passed 00:04:06.212 Test: test_nvme_pcie_qpair_build_prps_sgl_request ...passed 00:04:06.212 Test: test_nvme_pcie_qpair_build_hw_sgl_request ...passed 00:04:06.213 Test: test_nvme_pcie_qpair_build_contig_request ...passed 00:04:06.213 Test: test_nvme_pcie_ctrlr_regs_get_set ...passed 00:04:06.213 Test: test_nvme_pcie_ctrlr_map_unmap_cmb ...passed 00:04:06.213 Test: test_nvme_pcie_ctrlr_map_io_cmb ...passed 00:04:06.213 Test: test_nvme_pcie_ctrlr_map_unmap_pmr ...passed 00:04:06.213 Test: test_nvme_pcie_ctrlr_config_pmr ...passed 00:04:06.213 Test: test_nvme_pcie_ctrlr_map_io_pmr ...passed 00:04:06.213 00:04:06.213 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.213 suites 1 1 n/a 0 0 00:04:06.213 tests 14 14 14 0 0 00:04:06.213 asserts 235 235 235 0 n/a 00:04:06.213 00:04:06.213 Elapsed time = 0.000 seconds 00:04:06.213 [2024-07-14 21:03:17.631469] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:04:06.213 [2024-07-14 21:03:17.631623] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1234:nvme_pcie_prp_list_append: *ERROR*: PRP 2 not page aligned (0x900800) 00:04:06.213 [2024-07-14 21:03:17.631636] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1224:nvme_pcie_prp_list_append: *ERROR*: vtophys(0x100000) failed 00:04:06.213 [2024-07-14 21:03:17.631675] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:04:06.213 [2024-07-14 21:03:17.631693] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1218:nvme_pcie_prp_list_append: *ERROR*: out of PRP entries 00:04:06.213 [2024-07-14 21:03:17.631771] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1205:nvme_pcie_prp_list_append: *ERROR*: virt_addr 0x100001 not dword aligned 00:04:06.213 [2024-07-14 21:03:17.631799] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 442:nvme_pcie_ctrlr_map_io_cmb: *ERROR*: CMB is already in use for submission queues. 
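The nvme_pcie_prp_list_append records above encode the two alignment rules this suite provokes on purpose: the buffer's starting virtual address must be dword-aligned, and every PRP entry after the first must be page-aligned. A minimal illustration of those checks, assuming a 4 KiB memory page size as in the test; the helper below is a sketch for reading the log, not the SPDK function itself:

    #include <stdbool.h>
    #include <stdint.h>

    #define MPS 4096u   /* assumption: 4 KiB memory page size */

    /* Mirrors the failures logged above: "virt_addr 0x100001 not dword
     * aligned" and "PRP 2 not page aligned (0x900800)". */
    static bool
    prp_args_valid(uintptr_t virt_addr, uintptr_t later_prp_entry)
    {
        if (virt_addr & 3u) {
            return false;   /* start of buffer must be 4-byte aligned */
        }
        if (later_prp_entry & (MPS - 1u)) {
            return false;   /* PRP entries 2..n must be page aligned */
        }
        return true;
    }

With MPS = 4096, 0x100001 trips the first check and 0x900800 the second, which is exactly what the two ERROR records report.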
00:04:06.213 [2024-07-14 21:03:17.631812] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 521:nvme_pcie_ctrlr_map_pmr: *ERROR*: invalid base indicator register value 00:04:06.213 [2024-07-14 21:03:17.631824] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 647:nvme_pcie_ctrlr_config_pmr: *ERROR*: PMR is already disabled 00:04:06.213 [2024-07-14 21:03:17.631835] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie.c: 699:nvme_pcie_ctrlr_map_io_pmr: *ERROR*: PMR is not supported by the controller 00:04:06.213 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@97 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_poll_group.c/nvme_poll_group_ut 00:04:06.213 00:04:06.213 00:04:06.213 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.213 http://cunit.sourceforge.net/ 00:04:06.213 00:04:06.213 00:04:06.213 Suite: nvme_ns_cmd 00:04:06.213 Test: nvme_poll_group_create_test ...passed 00:04:06.213 Test: nvme_poll_group_add_remove_test ...passed 00:04:06.213 Test: nvme_poll_group_process_completions ...passed 00:04:06.213 Test: nvme_poll_group_destroy_test ...passed 00:04:06.213 Test: nvme_poll_group_get_free_stats ...passed 00:04:06.213 00:04:06.213 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.213 suites 1 1 n/a 0 0 00:04:06.213 tests 5 5 5 0 0 00:04:06.213 asserts 75 75 75 0 n/a 00:04:06.213 00:04:06.213 Elapsed time = 0.000 seconds 00:04:06.213 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@98 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_quirks.c/nvme_quirks_ut 00:04:06.213 00:04:06.213 00:04:06.213 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.213 http://cunit.sourceforge.net/ 00:04:06.213 00:04:06.213 00:04:06.213 Suite: nvme_quirks 00:04:06.213 Test: test_nvme_quirks_striping ...passed 00:04:06.213 00:04:06.213 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.213 suites 1 1 n/a 0 0 00:04:06.213 tests 1 1 1 0 0 00:04:06.213 asserts 5 5 5 0 n/a 00:04:06.213 00:04:06.213 Elapsed time = 0.000 seconds 00:04:06.213 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@99 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_tcp.c/nvme_tcp_ut 00:04:06.213 00:04:06.213 00:04:06.213 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.213 http://cunit.sourceforge.net/ 00:04:06.213 00:04:06.213 00:04:06.213 Suite: nvme_tcp 00:04:06.213 Test: test_nvme_tcp_pdu_set_data_buf ...passed 00:04:06.213 Test: test_nvme_tcp_build_iovs ...passed 00:04:06.213 Test: test_nvme_tcp_build_sgl_request ...[2024-07-14 21:03:17.647013] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x820c7f1c8, and the iovcnt=16, remaining_size=28672 00:04:06.213 passed 00:04:06.213 Test: test_nvme_tcp_pdu_set_data_buf_with_md ...passed 00:04:06.213 Test: test_nvme_tcp_build_iovs_with_md ...passed 00:04:06.213 Test: test_nvme_tcp_req_complete_safe ...passed 00:04:06.213 Test: test_nvme_tcp_req_get ...passed 00:04:06.213 Test: test_nvme_tcp_req_init ...passed 00:04:06.213 Test: test_nvme_tcp_qpair_capsule_cmd_send ...passed 00:04:06.213 Test: test_nvme_tcp_qpair_write_pdu ...passed 00:04:06.213 Test: test_nvme_tcp_qpair_set_recv_state ...passed 00:04:06.213 Test: test_nvme_tcp_alloc_reqs ...passed 00:04:06.213 Test: test_nvme_tcp_qpair_send_h2c_term_req ...passed 00:04:06.213 Test: test_nvme_tcp_pdu_ch_handle ...passed 00:04:06.213 Test: test_nvme_tcp_qpair_connect_sock ...[2024-07-14 21:03:17.647266] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 
328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c80d78 is same with the state(6) to be set 00:04:06.213 [2024-07-14 21:03:17.647308] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c80d78 is same with the state(5) to be set 00:04:06.213 [2024-07-14 21:03:17.647322] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1190:nvme_tcp_pdu_ch_handle: *ERROR*: Already received IC_RESP PDU, and we should reject this pdu=0x820c80508 00:04:06.213 [2024-07-14 21:03:17.647331] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1250:nvme_tcp_pdu_ch_handle: *ERROR*: Expected PDU header length 128, got 0 00:04:06.213 [2024-07-14 21:03:17.647340] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c80d78 is same with the state(5) to be set 00:04:06.213 [2024-07-14 21:03:17.647348] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1200:nvme_tcp_pdu_ch_handle: *ERROR*: The TCP/IP tqpair connection is not negotiated 00:04:06.213 [2024-07-14 21:03:17.647357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c80d78 is same with the state(5) to be set 00:04:06.213 [2024-07-14 21:03:17.647366] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:04:06.213 [2024-07-14 21:03:17.647374] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c80d78 is same with the state(5) to be set 00:04:06.213 [2024-07-14 21:03:17.647386] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c80d78 is same with the state(5) to be set 00:04:06.213 [2024-07-14 21:03:17.647395] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c80d78 is same with the state(5) to be set 00:04:06.213 [2024-07-14 21:03:17.647404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c80d78 is same with the state(5) to be set 00:04:06.213 [2024-07-14 21:03:17.647412] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c80d78 is same with the state(5) to be set 00:04:06.213 [2024-07-14 21:03:17.647421] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c80d78 is same with the state(5) to be set 00:04:06.213 [2024-07-14 21:03:17.647451] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 3 00:04:06.213 [2024-07-14 21:03:17.647461] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:04:06.213 [2024-07-14 21:03:17.687144] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2345:nvme_tcp_qpair_connect_sock: *ERROR*: dst_addr nvme_parse_addr() failed 00:04:06.213 passed 00:04:06.213 Test: test_nvme_tcp_qpair_icreq_send ...passed 00:04:06.213 Test: test_nvme_tcp_c2h_payload_handle ...passed[2024-07-14 21:03:17.687238] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820c80940): PDU Sequence Error 00:04:06.213 00:04:06.213 Test: 
test_nvme_tcp_icresp_handle ...passed 00:04:06.213 Test: test_nvme_tcp_pdu_payload_handle ...passed 00:04:06.213 Test: test_nvme_tcp_capsule_resp_hdr_handle ...passed 00:04:06.213 Test: test_nvme_tcp_ctrlr_connect_qpair ...passed 00:04:06.213 Test: test_nvme_tcp_ctrlr_disconnect_qpair ...[2024-07-14 21:03:17.687270] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1576:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp PFV 0, got 1 00:04:06.213 [2024-07-14 21:03:17.687286] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1584:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp maxh2cdata >=4096, got 2048 00:04:06.213 [2024-07-14 21:03:17.687300] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c80d78 is same with the state(5) to be set 00:04:06.213 [2024-07-14 21:03:17.687315] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1592:nvme_tcp_icresp_handle: *ERROR*: Expected ICResp cpda <=31, got 64 00:04:06.213 [2024-07-14 21:03:17.687328] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c80d78 is same with the state(5) to be set 00:04:06.213 [2024-07-14 21:03:17.687343] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c80d78 is same with the state(0) to be set 00:04:06.213 [2024-07-14 21:03:17.687374] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1358:nvme_tcp_c2h_term_req_dump: *ERROR*: Error info of pdu(0x820c80940): PDU Sequence Error 00:04:06.213 [2024-07-14 21:03:17.687424] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1653:nvme_tcp_capsule_resp_hdr_handle: *ERROR*: no tcp_req is found with cid=1 for tqpair=0x820c80d78 00:04:06.213 [2024-07-14 21:03:17.687469] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 358:nvme_tcp_ctrlr_disconnect_qpair: *ERROR*: tqpair=0x820c7ead8, errno=0, rc=0 00:04:06.213 [2024-07-14 21:03:17.687485] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7ead8 is same with the state(5) to be set 00:04:06.213 [2024-07-14 21:03:17.687499] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 328:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x820c7ead8 is same with the state(5) to be set 00:04:06.213 passed 00:04:06.213 Test: test_nvme_tcp_ctrlr_create_io_qpair ...[2024-07-14 21:03:17.687578] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820c7ead8 (0): No error: 0 00:04:06.213 [2024-07-14 21:03:17.687597] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2186:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x820c7ead8 (0): No error: 0 00:04:06.471 [2024-07-14 21:03:17.759352] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:04:06.471 [2024-07-14 21:03:17.759410] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 
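The two "Minimum queue size is 2" records just above come from the TCP transport rejecting qpair sizes 0 and 1 in test_nvme_tcp_ctrlr_create_io_qpair. From application code the size is chosen through the I/O qpair options; a hedged sketch, where ctrlr is assumed to be an already-attached controller:

    #include "spdk/nvme.h"

    /* Sketch: allocate an I/O qpair with an explicit queue size.
     * Anything below 2 is rejected by the transport, as logged above. */
    static struct spdk_nvme_qpair *
    alloc_io_qpair(struct spdk_nvme_ctrlr *ctrlr)
    {
        struct spdk_nvme_io_qpair_opts opts;

        spdk_nvme_ctrlr_get_default_io_qpair_opts(ctrlr, &opts, sizeof(opts));
        opts.io_queue_size = 128;    /* must be >= 2 */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, &opts, sizeof(opts));
    }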
00:04:06.471 passed 00:04:06.471 Test: test_nvme_tcp_ctrlr_delete_io_qpair ...passed 00:04:06.471 Test: test_nvme_tcp_poll_group_get_stats ...passed 00:04:06.471 Test: test_nvme_tcp_ctrlr_construct ...[2024-07-14 21:03:17.759467] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:06.471 [2024-07-14 21:03:17.759478] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2964:nvme_tcp_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:06.471 passed 00:04:06.471 Test: test_nvme_tcp_qpair_submit_request ...passed 00:04:06.471 00:04:06.471 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.471 suites 1 1 n/a 0 0 00:04:06.471 tests 27 27 27 0 0 00:04:06.471 asserts 624 624 624 0 n/a 00:04:06.471 00:04:06.471 Elapsed time = 0.062 seconds 00:04:06.471 [2024-07-14 21:03:17.759522] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2517:nvme_tcp_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:04:06.471 [2024-07-14 21:03:17.759542] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:04:06.471 [2024-07-14 21:03:17.759556] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2333:nvme_tcp_qpair_connect_sock: *ERROR*: Unhandled ADRFAM 254 00:04:06.471 [2024-07-14 21:03:17.759564] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:04:06.471 [2024-07-14 21:03:17.759580] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2384:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18cc5aa6b000 with addr=192.168.1.78, port=23 00:04:06.471 [2024-07-14 21:03:17.759589] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:04:06.471 [2024-07-14 21:03:17.759609] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c: 849:nvme_tcp_build_sgl_request: *ERROR*: Failed to construct tcp_req=0x18cc5aa39180, and the iovcnt=1, remaining_size=1024 00:04:06.471 [2024-07-14 21:03:17.759619] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_tcp.c:1035:nvme_tcp_qpair_submit_request: *ERROR*: nvme_tcp_req_init() failed 00:04:06.471 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@100 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_transport.c/nvme_transport_ut 00:04:06.471 00:04:06.471 00:04:06.471 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.471 http://cunit.sourceforge.net/ 00:04:06.471 00:04:06.471 00:04:06.471 Suite: nvme_transport 00:04:06.471 Test: test_nvme_get_transport ...passed 00:04:06.471 Test: test_nvme_transport_poll_group_connect_qpair ...passed 00:04:06.471 Test: test_nvme_transport_poll_group_disconnect_qpair ...passed 00:04:06.471 Test: test_nvme_transport_poll_group_add_remove ...passed 00:04:06.471 Test: test_ctrlr_get_memory_domains ...passed 00:04:06.471 00:04:06.471 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.471 suites 1 1 n/a 0 0 00:04:06.471 tests 5 5 5 0 0 00:04:06.471 asserts 28 28 28 0 n/a 00:04:06.471 00:04:06.471 Elapsed time = 0.000 seconds 00:04:06.471 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@101 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_io_msg.c/nvme_io_msg_ut 00:04:06.471 00:04:06.471 00:04:06.471 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.471 http://cunit.sourceforge.net/ 00:04:06.471 00:04:06.471 00:04:06.471 Suite: nvme_io_msg 00:04:06.471 Test: 
test_nvme_io_msg_send ...passed 00:04:06.471 Test: test_nvme_io_msg_process ...passed 00:04:06.471 Test: test_nvme_io_msg_ctrlr_register_unregister ...passed 00:04:06.471 00:04:06.471 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.471 suites 1 1 n/a 0 0 00:04:06.471 tests 3 3 3 0 0 00:04:06.471 asserts 56 56 56 0 n/a 00:04:06.471 00:04:06.471 Elapsed time = 0.000 seconds 00:04:06.471 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@102 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_pcie_common.c/nvme_pcie_common_ut 00:04:06.471 00:04:06.471 00:04:06.471 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.471 http://cunit.sourceforge.net/ 00:04:06.471 00:04:06.471 00:04:06.471 Suite: nvme_pcie_common 00:04:06.471 Test: test_nvme_pcie_ctrlr_alloc_cmb ...[2024-07-14 21:03:17.782543] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 87:nvme_pcie_ctrlr_alloc_cmb: *ERROR*: Tried to allocate past valid CMB range! 00:04:06.471 passed 00:04:06.471 Test: test_nvme_pcie_qpair_construct_destroy ...passed 00:04:06.471 Test: test_nvme_pcie_ctrlr_cmd_create_delete_io_queue ...passed 00:04:06.471 Test: test_nvme_pcie_ctrlr_connect_qpair ...passed 00:04:06.471 Test: test_nvme_pcie_ctrlr_construct_admin_qpair ...[2024-07-14 21:03:17.782871] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 504:nvme_completion_create_cq_cb: *ERROR*: nvme_create_io_cq failed! 00:04:06.471 [2024-07-14 21:03:17.782895] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 457:nvme_completion_create_sq_cb: *ERROR*: nvme_create_io_sq failed, deleting cq! 00:04:06.471 [2024-07-14 21:03:17.782911] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c: 551:_nvme_pcie_ctrlr_create_io_qpair: *ERROR*: Failed to send request to create_io_cq 00:04:06.471 passed 00:04:06.471 Test: test_nvme_pcie_poll_group_get_stats ...passed 00:04:06.471 00:04:06.471 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.471 suites 1 1 n/a 0 0 00:04:06.471 tests 6 6 6 0 0 00:04:06.471 asserts 148 148 148 0 n/a 00:04:06.471 00:04:06.471 Elapsed time = 0.000 seconds 00:04:06.471 [2024-07-14 21:03:17.783050] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:06.471 [2024-07-14 21:03:17.783065] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_pcie_common.c:1797:nvme_pcie_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:06.471 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@103 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_fabric.c/nvme_fabric_ut 00:04:06.471 00:04:06.471 00:04:06.471 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.471 http://cunit.sourceforge.net/ 00:04:06.471 00:04:06.471 00:04:06.471 Suite: nvme_fabric 00:04:06.471 Test: test_nvme_fabric_prop_set_cmd ...passed 00:04:06.471 Test: test_nvme_fabric_prop_get_cmd ...passed 00:04:06.471 Test: test_nvme_fabric_get_discovery_log_page ...passed 00:04:06.471 Test: test_nvme_fabric_discover_probe ...passed 00:04:06.471 Test: test_nvme_fabric_qpair_connect ...[2024-07-14 21:03:17.789180] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_fabric.c: 607:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -85, trtype:(null) adrfam:(null) traddr: trsvcid: subnqn:nqn.2016-06.io.spdk:subsystem1 00:04:06.471 passed 00:04:06.471 00:04:06.471 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.471 suites 1 1 n/a 0 0 00:04:06.471 tests 5 5 5 0 0 00:04:06.471 asserts 60 60 60 0 n/a 00:04:06.471 
00:04:06.471 Elapsed time = 0.000 seconds 00:04:06.471 21:03:17 unittest.unittest_nvme -- unit/unittest.sh@104 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_opal.c/nvme_opal_ut 00:04:06.471 00:04:06.471 00:04:06.471 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.471 http://cunit.sourceforge.net/ 00:04:06.471 00:04:06.471 00:04:06.471 Suite: nvme_opal 00:04:06.471 Test: test_opal_nvme_security_recv_send_done ...passed 00:04:06.471 Test: test_opal_add_short_atom_header ...passed 00:04:06.471 00:04:06.471 [2024-07-14 21:03:17.794053] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_opal.c: 171:opal_add_token_bytestring: *ERROR*: Error adding bytestring: end of buffer. 00:04:06.471 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.471 suites 1 1 n/a 0 0 00:04:06.471 tests 2 2 2 0 0 00:04:06.471 asserts 22 22 22 0 n/a 00:04:06.471 00:04:06.471 Elapsed time = 0.000 seconds 00:04:06.471 00:04:06.471 real 0m0.428s 00:04:06.471 user 0m0.055s 00:04:06.471 sys 0m0.174s 00:04:06.471 21:03:17 unittest.unittest_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:06.471 21:03:17 unittest.unittest_nvme -- common/autotest_common.sh@10 -- # set +x 00:04:06.471 ************************************ 00:04:06.471 END TEST unittest_nvme 00:04:06.471 ************************************ 00:04:06.471 21:03:17 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:06.471 21:03:17 unittest -- unit/unittest.sh@249 -- # run_test unittest_log /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:04:06.471 21:03:17 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:06.471 21:03:17 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:06.471 21:03:17 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:06.471 ************************************ 00:04:06.471 START TEST unittest_log 00:04:06.471 ************************************ 00:04:06.471 21:03:17 unittest.unittest_log -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/log/log.c/log_ut 00:04:06.471 00:04:06.471 00:04:06.471 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.471 http://cunit.sourceforge.net/ 00:04:06.471 00:04:06.471 00:04:06.471 Suite: log 00:04:06.471 Test: log_test ...[2024-07-14 21:03:17.838232] log_ut.c: 56:log_test: *WARNING*: log warning unit test 00:04:06.471 [2024-07-14 21:03:17.838480] log_ut.c: 57:log_test: *DEBUG*: log test 00:04:06.471 passed 00:04:06.471 Test: deprecation ...log dump test: 00:04:06.471 00000000 6c 6f 67 20 64 75 6d 70 log dump 00:04:06.472 spdk dump test: 00:04:06.472 00000000 73 70 64 6b 20 64 75 6d 70 spdk dump 00:04:06.472 spdk dump test: 00:04:06.472 00000000 73 70 64 6b 20 64 75 6d 70 20 31 36 20 6d 6f 72 spdk dump 16 mor 00:04:06.472 00000010 65 20 63 68 61 72 73 e chars 00:04:07.406 passed 00:04:07.406 00:04:07.407 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.407 suites 1 1 n/a 0 0 00:04:07.407 tests 2 2 2 0 0 00:04:07.407 asserts 73 73 73 0 n/a 00:04:07.407 00:04:07.407 Elapsed time = 0.000 seconds 00:04:07.407 00:04:07.407 real 0m1.014s 00:04:07.407 user 0m0.000s 00:04:07.407 sys 0m0.008s 00:04:07.407 21:03:18 unittest.unittest_log -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.407 21:03:18 unittest.unittest_log -- common/autotest_common.sh@10 -- # set +x 00:04:07.407 ************************************ 00:04:07.407 END TEST unittest_log 00:04:07.407 ************************************ 00:04:07.407 21:03:18 unittest -- 
common/autotest_common.sh@1142 -- # return 0 00:04:07.407 21:03:18 unittest -- unit/unittest.sh@250 -- # run_test unittest_lvol /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:04:07.407 21:03:18 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.407 21:03:18 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.407 21:03:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:07.407 ************************************ 00:04:07.407 START TEST unittest_lvol 00:04:07.407 ************************************ 00:04:07.407 21:03:18 unittest.unittest_lvol -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/lvol/lvol.c/lvol_ut 00:04:07.407 00:04:07.407 00:04:07.407 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.407 http://cunit.sourceforge.net/ 00:04:07.407 00:04:07.407 00:04:07.407 Suite: lvol 00:04:07.407 Test: lvs_init_unload_success ...[2024-07-14 21:03:18.903361] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 892:spdk_lvs_unload: *ERROR*: Lvols still open on lvol store 00:04:07.407 passed 00:04:07.407 Test: lvs_init_destroy_success ...passed 00:04:07.407 Test: lvs_init_opts_success ...passed 00:04:07.407 Test: lvs_unload_lvs_is_null_fail ...passed 00:04:07.407 Test: lvs_names ...[2024-07-14 21:03:18.903664] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 962:spdk_lvs_destroy: *ERROR*: Lvols still open on lvol store 00:04:07.407 [2024-07-14 21:03:18.903712] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 882:spdk_lvs_unload: *ERROR*: Lvol store is NULL 00:04:07.407 [2024-07-14 21:03:18.903734] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 726:spdk_lvs_init: *ERROR*: No name specified. 00:04:07.407 [2024-07-14 21:03:18.903751] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 720:spdk_lvs_init: *ERROR*: Name has no null terminator. 
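The lvs_names failures just logged (no name specified, name without a NUL terminator, plus the duplicate-name case that follows) are all argument validation in spdk_lvs_init before any metadata is written. A hedged sketch of the path those checks guard; bs_dev stands in for a valid blobstore block device and the callback shape follows the public lvol header:

    #include <stdio.h>
    #include "spdk/lvol.h"

    static void
    lvs_ready(void *cb_arg, struct spdk_lvol_store *lvs, int lvserrno)
    {
        /* a nonzero lvserrno corresponds to validation errors like those above */
    }

    static int
    create_store(struct spdk_bs_dev *bs_dev)
    {
        struct spdk_lvs_opts opts;

        spdk_lvs_opts_init(&opts);   /* sets opts_size, sidestepping the
                                      * "opts_size should not be zero value"
                                      * check exercised just below */
        snprintf(opts.name, sizeof(opts.name), "lvs0");  /* non-empty, NUL-terminated */
        return spdk_lvs_init(bs_dev, &opts, lvs_ready, NULL);
    }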
00:04:07.407 passed 00:04:07.407 Test: lvol_create_destroy_success ...passed 00:04:07.407 Test: lvol_create_fail ...passed 00:04:07.407 Test: lvol_destroy_fail ...passed 00:04:07.407 Test: lvol_close ...passed 00:04:07.407 Test: lvol_resize ...passed 00:04:07.407 Test: lvol_set_read_only ...passed 00:04:07.407 Test: test_lvs_load ...[2024-07-14 21:03:18.903777] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 736:spdk_lvs_init: *ERROR*: lvolstore with name x already exists 00:04:07.407 [2024-07-14 21:03:18.903854] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 689:spdk_lvs_init: *ERROR*: Blobstore device does not exist 00:04:07.407 [2024-07-14 21:03:18.903876] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1190:spdk_lvol_create: *ERROR*: lvol store does not exist 00:04:07.407 [2024-07-14 21:03:18.903916] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1026:lvol_delete_blob_cb: *ERROR*: Could not remove blob on lvol gracefully - forced removal 00:04:07.407 [2024-07-14 21:03:18.903945] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1614:spdk_lvol_close: *ERROR*: lvol does not exist 00:04:07.407 [2024-07-14 21:03:18.903959] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 995:lvol_close_blob_cb: *ERROR*: Could not close blob on lvol 00:04:07.407 passed 00:04:07.407 Test: lvols_load ...passed 00:04:07.407 Test: lvol_open ...passed 00:04:07.407 Test: lvol_snapshot ...passed 00:04:07.407 Test: lvol_snapshot_fail ...[2024-07-14 21:03:18.904050] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 631:lvs_opts_copy: *ERROR*: opts_size should not be zero value 00:04:07.407 [2024-07-14 21:03:18.904066] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 441:lvs_load: *ERROR*: Invalid options 00:04:07.407 [2024-07-14 21:03:18.904099] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:04:07.407 [2024-07-14 21:03:18.904139] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 227:load_next_lvol: *ERROR*: Failed to fetch blobs list 00:04:07.407 passed 00:04:07.407 Test: lvol_clone ...passed 00:04:07.407 Test: lvol_clone_fail ...passed 00:04:07.407 Test: lvol_iter_clones ...passed 00:04:07.407 Test: lvol_refcnt ...passed 00:04:07.407 Test: lvol_names ...[2024-07-14 21:03:18.904240] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name snap already exists 00:04:07.407 [2024-07-14 21:03:18.904312] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone already exists 00:04:07.407 [2024-07-14 21:03:18.904370] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1572:spdk_lvol_destroy: *ERROR*: Cannot destroy lvol 7f04adc5-4224-11ef-aa83-81fbc7dfef58 because it is still open 00:04:07.407 [2024-07-14 21:03:18.904403] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 
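This batch exercises lvs_verify_lvol_name: an unterminated name here, with the duplicate-name and name-still-being-created cases following, all rejected before a blob is ever allocated. A sketch of the create call those checks protect; the signature with a clear method argument is taken from recent SPDK releases and should be treated as an assumption:

    #include "spdk/lvol.h"

    static void
    lvol_ready(void *cb_arg, struct spdk_lvol *lvol, int lvolerrno)
    {
        /* -EEXIST here matches "lvol with name ... already exists" */
    }

    static int
    create_volume(struct spdk_lvol_store *lvs)
    {
        /* name must be unique within the store and NUL-terminated */
        return spdk_lvol_create(lvs, "lvol0", 10 * 1024 * 1024,
                                false /* thin_provisioned */,
                                LVOL_CLEAR_WITH_DEFAULT, lvol_ready, NULL);
    }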
00:04:07.407 [2024-07-14 21:03:18.904423] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:04:07.407 passed 00:04:07.407 Test: lvol_create_thin_provisioned ...passed 00:04:07.407 Test: lvol_rename ...passed 00:04:07.407 Test: lvs_rename ...passed 00:04:07.407 Test: lvol_inflate ...[2024-07-14 21:03:18.904452] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1169:lvs_verify_lvol_name: *ERROR*: lvol with name tmp_name is being already created 00:04:07.407 [2024-07-14 21:03:18.904514] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:04:07.407 [2024-07-14 21:03:18.904567] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1524:spdk_lvol_rename: *ERROR*: Lvol lvol_new already exists in lvol store lvs 00:04:07.407 [2024-07-14 21:03:18.904609] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c: 769:lvs_rename_cb: *ERROR*: Lvol store rename operation failed 00:04:07.407 [2024-07-14 21:03:18.904640] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:04:07.407 passed 00:04:07.407 Test: lvol_decouple_parent ...passed 00:04:07.407 Test: lvol_get_xattr ...passed 00:04:07.407 Test: lvol_esnap_reload ...passed 00:04:07.407 Test: lvol_esnap_create_bad_args ...passed 00:04:07.407 Test: lvol_esnap_create_delete ...passed 00:04:07.407 Test: lvol_esnap_load_esnaps ...[2024-07-14 21:03:18.904671] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1658:lvol_inflate_cb: *ERROR*: Could not inflate lvol 00:04:07.407 [2024-07-14 21:03:18.904729] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1245:spdk_lvol_create_esnap_clone: *ERROR*: lvol store does not exist 00:04:07.407 [2024-07-14 21:03:18.904744] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1156:lvs_verify_lvol_name: *ERROR*: Name has no null terminator. 00:04:07.407 [2024-07-14 21:03:18.904761] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1260:spdk_lvol_create_esnap_clone: *ERROR*: Cannot create 'lvs/clone1': size 4198400 is not an integer multiple of cluster size 1048576 00:04:07.407 [2024-07-14 21:03:18.904781] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol already exists 00:04:07.407 [2024-07-14 21:03:18.904815] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name clone1 already exists 00:04:07.407 passed 00:04:07.407 Test: lvol_esnap_missing ...passed 00:04:07.407 Test: lvol_esnap_hotplug ... 
00:04:07.407 lvol_esnap_hotplug scenario 0: PASS - one missing, happy path 00:04:07.407 lvol_esnap_hotplug scenario 1: PASS - one missing, cb registers degraded_set 00:04:07.407 lvol_esnap_hotplug scenario 2: PASS - one missing, cb retuns -ENOMEM 00:04:07.407 [2024-07-14 21:03:18.904867] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1833:lvs_esnap_bs_dev_create: *ERROR*: Blob 0x2a: no lvs context nor lvol context 00:04:07.407 [2024-07-14 21:03:18.904906] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:04:07.407 [2024-07-14 21:03:18.904921] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:1162:lvs_verify_lvol_name: *ERROR*: lvol with name lvol1 already exists 00:04:07.407 [2024-07-14 21:03:18.905015] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 7f04c6e4-4224-11ef-aa83-81fbc7dfef58: failed to create esnap bs_dev: error -12 00:04:07.407 lvol_esnap_hotplug scenario 3: PASS - two missing with same esnap, happy path 00:04:07.407 lvol_esnap_hotplug scenario 4: PASS - two missing with same esnap, first -ENOMEM 00:04:07.407 lvol_esnap_hotplug scenario 5: PASS - two missing with same esnap, second -ENOMEM 00:04:07.407 lvol_esnap_hotplug scenario 6: PASS - two missing with different esnaps, happy path 00:04:07.407 lvol_esnap_hotplug scenario 7: PASS - two missing with different esnaps, first still missing 00:04:07.407 lvol_esnap_hotplug scenario 8: PASS - three missing with same esnap, happy path 00:04:07.407 lvol_esnap_hotplug scenario 9: PASS - three missing with same esnap, first still missing 00:04:07.407 lvol_esnap_hotplug scenario 10: PASS - three missing with same esnap, first two still missing 00:04:07.407 lvol_esnap_hotplug scenario 11: PASS - three missing with same esnap, middle still missing 00:04:07.407 lvol_esnap_hotplug scenario 12: PASS - three missing with same esnap, last still missing 00:04:07.407 passed 00:04:07.407 Test: lvol_get_by ...[2024-07-14 21:03:18.905081] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 7f04c953-4224-11ef-aa83-81fbc7dfef58: failed to create esnap bs_dev: error -12 00:04:07.407 [2024-07-14 21:03:18.905117] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2063:lvs_esnap_degraded_hotplug: *ERROR*: lvol 7f04caea-4224-11ef-aa83-81fbc7dfef58: failed to create esnap bs_dev: error -12 00:04:07.407 passed 00:04:07.407 Test: lvol_shallow_copy ...passed 00:04:07.407 Test: lvol_set_parent ...passed 00:04:07.407 Test: lvol_set_external_parent ...passed 00:04:07.407 00:04:07.407 [2024-07-14 21:03:18.905366] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2274:spdk_lvol_shallow_copy: *ERROR*: lvol must not be NULL 00:04:07.407 [2024-07-14 21:03:18.905382] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2281:spdk_lvol_shallow_copy: *ERROR*: lvol 7f04d4a2-4224-11ef-aa83-81fbc7dfef58 shallow copy, ext_dev must not be NULL 00:04:07.407 [2024-07-14 21:03:18.905438] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2338:spdk_lvol_set_parent: *ERROR*: lvol must not be NULL 00:04:07.407 [2024-07-14 21:03:18.905453] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2344:spdk_lvol_set_parent: *ERROR*: snapshot must not be NULL 00:04:07.407 [2024-07-14 21:03:18.905483] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2393:spdk_lvol_set_external_parent: *ERROR*: lvol must not be NULL 00:04:07.407 [2024-07-14 21:03:18.905498] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2399:spdk_lvol_set_external_parent: *ERROR*: snapshot must not be NULL 
00:04:07.407 [2024-07-14 21:03:18.905514] /home/vagrant/spdk_repo/spdk/lib/lvol/lvol.c:2406:spdk_lvol_set_external_parent: *ERROR*: lvol lvol and esnap have the same UUID 00:04:07.407 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.407 suites 1 1 n/a 0 0 00:04:07.407 tests 37 37 37 0 0 00:04:07.407 asserts 1505 1505 1505 0 n/a 00:04:07.407 00:04:07.407 Elapsed time = 0.000 seconds 00:04:07.407 00:04:07.407 real 0m0.011s 00:04:07.407 user 0m0.011s 00:04:07.407 sys 0m0.007s 00:04:07.407 21:03:18 unittest.unittest_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.407 ************************************ 00:04:07.407 END TEST unittest_lvol 00:04:07.407 ************************************ 00:04:07.407 21:03:18 unittest.unittest_lvol -- common/autotest_common.sh@10 -- # set +x 00:04:07.407 21:03:18 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:07.407 21:03:18 unittest -- unit/unittest.sh@251 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:07.407 21:03:18 unittest -- unit/unittest.sh@252 -- # run_test unittest_nvme_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:04:07.407 21:03:18 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.407 21:03:18 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.408 21:03:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:07.408 ************************************ 00:04:07.408 START TEST unittest_nvme_rdma 00:04:07.408 ************************************ 00:04:07.408 21:03:18 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvme/nvme_rdma.c/nvme_rdma_ut 00:04:07.669 00:04:07.669 00:04:07.669 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.669 http://cunit.sourceforge.net/ 00:04:07.669 00:04:07.669 00:04:07.669 Suite: nvme_rdma 00:04:07.669 Test: test_nvme_rdma_build_sgl_request ...[2024-07-14 21:03:18.956843] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -34 00:04:07.669 passed 00:04:07.669 Test: test_nvme_rdma_build_sgl_inline_request ...passed 00:04:07.669 Test: test_nvme_rdma_build_contig_request ...[2024-07-14 21:03:18.957020] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1553:nvme_rdma_build_sgl_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:04:07.669 [2024-07-14 21:03:18.957043] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1609:nvme_rdma_build_sgl_request: *ERROR*: Size of SGL descriptors (64) exceeds ICD (60) 00:04:07.669 [2024-07-14 21:03:18.957066] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1490:nvme_rdma_build_contig_request: *ERROR*: SGL length 16777216 exceeds max keyed SGL block size 16777215 00:04:07.669 passed 00:04:07.669 Test: test_nvme_rdma_build_contig_inline_request ...passed 00:04:07.669 Test: test_nvme_rdma_create_reqs ...passed 00:04:07.669 Test: test_nvme_rdma_create_rsps ...passed 00:04:07.669 Test: test_nvme_rdma_ctrlr_create_qpair ...passed 00:04:07.669 Test: test_nvme_rdma_poller_create ...passed 00:04:07.669 Test: test_nvme_rdma_qpair_process_cm_event ...passed 00:04:07.669 Test: test_nvme_rdma_ctrlr_construct ...[2024-07-14 21:03:18.957093] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 931:nvme_rdma_create_reqs: *ERROR*: Failed to allocate rdma_reqs 00:04:07.669 [2024-07-14 21:03:18.957131] 
/home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 849:nvme_rdma_create_rsps: *ERROR*: Failed to allocate rsp_sgls 00:04:07.669 [2024-07-14 21:03:18.957163] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 0. Minimum queue size is 2. 00:04:07.669 [2024-07-14 21:03:18.957175] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1747:nvme_rdma_ctrlr_create_qpair: *ERROR*: Failed to create qpair with size 1. Minimum queue size is 2. 00:04:07.669 [2024-07-14 21:03:18.957206] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 450:nvme_rdma_qpair_process_cm_event: *ERROR*: Unexpected Acceptor Event [255] 00:04:07.669 passed 00:04:07.669 Test: test_nvme_rdma_req_put_and_get ...passed 00:04:07.669 Test: test_nvme_rdma_req_init ...passed 00:04:07.669 Test: test_nvme_rdma_validate_cm_event ...passed 00:04:07.669 Test: test_nvme_rdma_qpair_init ...passed 00:04:07.669 Test: test_nvme_rdma_qpair_submit_request ...passed 00:04:07.669 Test: test_rdma_ctrlr_get_memory_domains ...passed 00:04:07.669 Test: test_rdma_get_memory_translation ...passed 00:04:07.669 Test: test_get_rdma_qpair_from_wc ...passed 00:04:07.669 Test: test_nvme_rdma_ctrlr_get_max_sges ...passed 00:04:07.669 Test: test_nvme_rdma_poll_group_get_stats ...[2024-07-14 21:03:18.957272] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ADDR_RESOLVED but received RDMA_CM_EVENT_CONNECT_RESPONSE (5) from CM event channel (status = 0) 00:04:07.669 [2024-07-14 21:03:18.957285] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 544:nvme_rdma_validate_cm_event: *ERROR*: Expected RDMA_CM_EVENT_ESTABLISHED but received RDMA_CM_EVENT_REJECTED (8) from CM event channel (status = 10) 00:04:07.669 [2024-07-14 21:03:18.957310] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1368:nvme_rdma_get_memory_translation: *ERROR*: DMA memory translation failed, rc -1, iov count 0 00:04:07.669 [2024-07-14 21:03:18.957325] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:1379:nvme_rdma_get_memory_translation: *ERROR*: RDMA memory translation failed, rc -1 00:04:07.669 [2024-07-14 21:03:18.957346] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:07.669 [2024-07-14 21:03:18.957357] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:3204:nvme_rdma_poll_group_get_stats: *ERROR*: Invalid stats or group pointer 00:04:07.669 passed 00:04:07.669 Test: test_nvme_rdma_qpair_set_poller ...[2024-07-14 21:03:18.957381] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 00:04:07.669 [2024-07-14 21:03:18.957393] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0xfeedbeef 00:04:07.669 [2024-07-14 21:03:18.957404] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x8202b6078 on poll group 0x202e9b472000 00:04:07.669 [2024-07-14 21:03:18.957415] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2916:nvme_rdma_poller_create: *ERROR*: Unable to create CQ, errno 0. 
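The poller records above show the RDMA transport looking up, or failing to create, a completion queue for each qpair inside its poll group. At the public API level the equivalent wiring is a poll group that owns qpairs and drives their completions; a hedged sketch, with signatures per recent spdk/nvme.h (the second argument to create is an optional acceleration table):

    #include "spdk/nvme.h"

    static void
    disconnected_cb(struct spdk_nvme_qpair *qpair, void *poll_group_ctx)
    {
        /* called for qpairs that drop out while the group is polled */
    }

    static int64_t
    poll_group_once(struct spdk_nvme_poll_group *group)
    {
        /* 0 = no per-qpair completion cap */
        return spdk_nvme_poll_group_process_completions(group, 0, disconnected_cb);
    }

    static struct spdk_nvme_poll_group *
    make_poll_group(struct spdk_nvme_qpair *qpair)
    {
        struct spdk_nvme_poll_group *group = spdk_nvme_poll_group_create(NULL, NULL);

        if (group != NULL && spdk_nvme_poll_group_add(group, qpair) != 0) {
            spdk_nvme_poll_group_destroy(group);
            return NULL;
        }
        return group;
    }

Note that spdk_nvme_poll_group_add only accepts qpairs that are not yet connected, a detail the sketch glosses over.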
00:04:07.669 [2024-07-14 21:03:18.957425] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c:2962:nvme_rdma_poll_group_get_poller: *ERROR*: Failed to create a poller for device 0x0 00:04:07.669 [2024-07-14 21:03:18.957435] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 647:nvme_rdma_qpair_set_poller: *ERROR*: Unable to find a cq for qpair 0x8202b6078 on poll group 0x202e9b472000 00:04:07.669 passed 00:04:07.669 00:04:07.669 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.669 suites 1 1 n/a 0 0 00:04:07.669 tests 21 21 21 0 0 00:04:07.669 asserts 397 397 397 0 n/a 00:04:07.669 00:04:07.669 Elapsed time = 0.000 seconds 00:04:07.669 [2024-07-14 21:03:18.957490] /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_rdma.c: 625:nvme_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:04:07.669 00:04:07.669 real 0m0.006s 00:04:07.669 user 0m0.005s 00:04:07.669 sys 0m0.004s 00:04:07.669 21:03:18 unittest.unittest_nvme_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.669 21:03:18 unittest.unittest_nvme_rdma -- common/autotest_common.sh@10 -- # set +x 00:04:07.669 ************************************ 00:04:07.669 END TEST unittest_nvme_rdma 00:04:07.669 ************************************ 00:04:07.669 21:03:18 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:07.669 21:03:18 unittest -- unit/unittest.sh@253 -- # run_test unittest_nvmf_transport /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:04:07.669 21:03:18 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.669 21:03:18 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.669 21:03:18 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:07.669 ************************************ 00:04:07.669 START TEST unittest_nvmf_transport 00:04:07.669 ************************************ 00:04:07.669 21:03:18 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/transport.c/transport_ut 00:04:07.669 00:04:07.669 00:04:07.669 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.669 http://cunit.sourceforge.net/ 00:04:07.669 00:04:07.669 00:04:07.669 Suite: nvmf 00:04:07.669 Test: test_spdk_nvmf_transport_create ...[2024-07-14 21:03:19.003194] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 251:nvmf_transport_create: *ERROR*: Transport type 'new_ops' unavailable. 
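The 'new_ops' record is the transport factory rejecting a transport name with no registered ops, and the option checks that follow (io_unit_size of 0, io_unit_size larger than the buffer pool, max_io_size not a power of 2, zero opts_size) guard the same entry point. A hedged sketch of that entry point, with TCP picked purely as a concrete example:

    #include <stddef.h>
    #include "spdk/nvmf.h"

    static struct spdk_nvmf_transport *
    make_tcp_transport(void)
    {
        struct spdk_nvmf_transport_opts opts = {0};

        /* Returns false for an unregistered transport name; passing 0 for
         * opts_size trips the "opts_size inside opts should not be zero
         * value" error seen further down in this run. */
        if (!spdk_nvmf_transport_opts_init("TCP", &opts, sizeof(opts))) {
            return NULL;
        }
        return spdk_nvmf_transport_create("TCP", &opts);
    }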
00:04:07.669 [2024-07-14 21:03:19.003372] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 271:nvmf_transport_create: *ERROR*: io_unit_size cannot be 0 00:04:07.669 passed 00:04:07.669 Test: test_nvmf_transport_poll_group_create ...passed 00:04:07.669 Test: test_spdk_nvmf_transport_opts_init ...passed 00:04:07.669 Test: test_spdk_nvmf_transport_listen_ext ...passed 00:04:07.669 00:04:07.669 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.669 suites 1 1 n/a 0 0 00:04:07.669 tests 4 4 4 0 0 00:04:07.669 asserts 49 49 49 0 n/a 00:04:07.669 00:04:07.669 Elapsed time = 0.000 seconds 00:04:07.669 [2024-07-14 21:03:19.003390] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 276:nvmf_transport_create: *ERROR*: io_unit_size 131072 is larger than iobuf pool large buffer size 65536 00:04:07.669 [2024-07-14 21:03:19.003419] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 259:nvmf_transport_create: *ERROR*: max_io_size 4096 must be a power of 2 and be greater than or equal 8KB 00:04:07.669 [2024-07-14 21:03:19.003443] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 792:spdk_nvmf_transport_opts_init: *ERROR*: Transport type invalid_ops unavailable. 00:04:07.669 [2024-07-14 21:03:19.003453] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 797:spdk_nvmf_transport_opts_init: *ERROR*: opts should not be NULL 00:04:07.669 [2024-07-14 21:03:19.003461] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 802:spdk_nvmf_transport_opts_init: *ERROR*: opts_size inside opts should not be zero value 00:04:07.669 00:04:07.669 real 0m0.004s 00:04:07.669 user 0m0.000s 00:04:07.669 sys 0m0.006s 00:04:07.669 21:03:19 unittest.unittest_nvmf_transport -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.669 21:03:19 unittest.unittest_nvmf_transport -- common/autotest_common.sh@10 -- # set +x 00:04:07.669 ************************************ 00:04:07.669 END TEST unittest_nvmf_transport 00:04:07.669 ************************************ 00:04:07.669 21:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:07.669 21:03:19 unittest -- unit/unittest.sh@254 -- # run_test unittest_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:04:07.669 21:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.669 21:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.669 21:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:07.669 ************************************ 00:04:07.669 START TEST unittest_rdma 00:04:07.669 ************************************ 00:04:07.669 21:03:19 unittest.unittest_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/rdma/common.c/common_ut 00:04:07.669 00:04:07.669 00:04:07.669 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.669 http://cunit.sourceforge.net/ 00:04:07.669 00:04:07.669 00:04:07.669 Suite: rdma_common 00:04:07.669 Test: test_spdk_rdma_pd ...[2024-07-14 21:03:19.049866] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:04:07.669 [2024-07-14 21:03:19.050189] /home/vagrant/spdk_repo/spdk/lib/rdma_utils/rdma_utils.c: 398:spdk_rdma_utils_get_pd: *ERROR*: Failed to get PD 00:04:07.669 passed 00:04:07.669 00:04:07.669 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.669 suites 1 1 n/a 0 0 00:04:07.669 tests 1 1 1 0 0 00:04:07.669 asserts 31 31 31 0 n/a 00:04:07.669 00:04:07.669 Elapsed time = 0.000 seconds 00:04:07.669 00:04:07.669 real 0m0.006s 
00:04:07.669 user 0m0.006s 00:04:07.669 sys 0m0.000s 00:04:07.669 21:03:19 unittest.unittest_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.669 21:03:19 unittest.unittest_rdma -- common/autotest_common.sh@10 -- # set +x 00:04:07.669 ************************************ 00:04:07.669 END TEST unittest_rdma 00:04:07.669 ************************************ 00:04:07.669 21:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:07.669 21:03:19 unittest -- unit/unittest.sh@257 -- # grep -q '#define SPDK_CONFIG_NVME_CUSE 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:07.669 21:03:19 unittest -- unit/unittest.sh@261 -- # run_test unittest_nvmf unittest_nvmf 00:04:07.669 21:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.669 21:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.669 21:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:07.669 ************************************ 00:04:07.669 START TEST unittest_nvmf 00:04:07.669 ************************************ 00:04:07.670 21:03:19 unittest.unittest_nvmf -- common/autotest_common.sh@1123 -- # unittest_nvmf 00:04:07.670 21:03:19 unittest.unittest_nvmf -- unit/unittest.sh@108 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr.c/ctrlr_ut 00:04:07.670 00:04:07.670 00:04:07.670 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.670 http://cunit.sourceforge.net/ 00:04:07.670 00:04:07.670 00:04:07.670 Suite: nvmf 00:04:07.670 Test: test_get_log_page ...passed 00:04:07.670 Test: test_process_fabrics_cmd ...passed 00:04:07.670 Test: test_connect ...[2024-07-14 21:03:19.105425] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2635:nvmf_ctrlr_get_log_page: *ERROR*: Invalid log page offset 0x2 00:04:07.670 [2024-07-14 21:03:19.105616] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x0 on qid 0 before CONNECT 00:04:07.670 [2024-07-14 21:03:19.105700] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1012:nvmf_ctrlr_cmd_connect: *ERROR*: Connect command data length 0x3ff too small 00:04:07.670 [2024-07-14 21:03:19.105713] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 875:_nvmf_ctrlr_connect: *ERROR*: Connect command unsupported RECFMT 1234 00:04:07.670 [2024-07-14 21:03:19.105724] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1051:nvmf_ctrlr_cmd_connect: *ERROR*: Connect HOSTNQN is not null terminated 00:04:07.670 [2024-07-14 21:03:19.105734] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:subsystem1' does not allow host 'nqn.2016-06.io.spdk:host1' 00:04:07.670 [2024-07-14 21:03:19.105744] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 886:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE = 0 00:04:07.670 [2024-07-14 21:03:19.105754] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 894:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE for admin queue 32 (min 1, max 31) 00:04:07.670 [2024-07-14 21:03:19.105768] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 900:_nvmf_ctrlr_connect: *ERROR*: Invalid SQSIZE 64 (min 1, max 63) 00:04:07.670 [2024-07-14 21:03:19.105778] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 926:_nvmf_ctrlr_connect: *ERROR*: The NVMf target only supports dynamic mode (CNTLID = 0x1234). 
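The connect records above walk CONNECT validation end to end: data length, RECFMT (1234 is rejected), HOSTNQN termination, host allow-listing, and SQSIZE, which is zero-based, so 31 is the largest legal value for the 32-entry admin queue ("Invalid SQSIZE for admin queue 32 (min 1, max 31)") and 63 for a 64-entry I/O queue. Only dynamic controller IDs are accepted, hence the rejection of CNTLID 0x1234. A sketch of the fields the test permutes, using the wire-format structs from spdk/nvmf_spec.h (exact layout per that header; treat it as an assumption):

    #include "spdk/nvmf_spec.h"

    /* Sketch: fill the CONNECT fields this suite varies. */
    static void
    fill_connect(struct spdk_nvmf_fabric_connect_cmd *cmd,
                 struct spdk_nvmf_fabric_connect_data *data,
                 uint16_t queue_entries)
    {
        cmd->recfmt  = 0;                   /* RECFMT 1234 is rejected above */
        cmd->qid     = 0;                   /* 0 = admin queue */
        cmd->sqsize  = queue_entries - 1;   /* zero-based: 32 entries -> 31 */
        data->cntlid = 0xffff;              /* dynamic controller ID */
    }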
00:04:07.670 [2024-07-14 21:03:19.105791] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0xffff 00:04:07.670 [2024-07-14 21:03:19.105813] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 676:nvmf_ctrlr_add_io_qpair: *ERROR*: I/O connect not allowed on discovery controller 00:04:07.670 passed 00:04:07.670 Test: test_get_ns_id_desc_list ...passed 00:04:07.670 Test: test_identify_ns ...[2024-07-14 21:03:19.105831] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 682:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect before ctrlr was enabled 00:04:07.670 [2024-07-14 21:03:19.105842] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 689:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOSQES 3 00:04:07.670 [2024-07-14 21:03:19.105852] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 696:nvmf_ctrlr_add_io_qpair: *ERROR*: Got I/O connect with invalid IOCQES 3 00:04:07.670 [2024-07-14 21:03:19.105863] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 720:nvmf_ctrlr_add_io_qpair: *ERROR*: Requested QID 3 but Max QID is 2 00:04:07.670 [2024-07-14 21:03:19.105881] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 295:nvmf_ctrlr_add_qpair: *ERROR*: Got I/O connect with duplicate QID 1 (cntlid:0) 00:04:07.670 [2024-07-14 21:03:19.105897] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 4, group 0x0) 00:04:07.670 [2024-07-14 21:03:19.105907] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c: 806:_nvmf_ctrlr_add_io_qpair: *ERROR*: Inactive admin qpair (state 0, group 0x0) 00:04:07.670 [2024-07-14 21:03:19.105958] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:04:07.670 passed 00:04:07.670 Test: test_identify_ns_iocs_specific ...passed 00:04:07.670 Test: test_reservation_write_exclusive ...passed 00:04:07.670 Test: test_reservation_exclusive_access ...passed 00:04:07.670 Test: test_reservation_write_exclusive_regs_only_and_all_regs ...passed 00:04:07.670 Test: test_reservation_exclusive_access_regs_only_and_all_regs ...passed 00:04:07.670 Test: test_reservation_notification_log_page ...passed 00:04:07.670 Test: test_get_dif_ctx ...passed 00:04:07.670 Test: test_set_get_features ...[2024-07-14 21:03:19.106007] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4 00:04:07.670 [2024-07-14 21:03:19.106030] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:04:07.670 [2024-07-14 21:03:19.106055] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:04:07.670 [2024-07-14 21:03:19.106102] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:2729:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:04:07.670 [2024-07-14 21:03:19.106193] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:04:07.670 passed 00:04:07.670 Test: test_identify_ctrlr ...passed 00:04:07.670 Test: test_identify_ctrlr_iocs_specific ...passed 00:04:07.670 Test: test_custom_admin_cmd ...passed 00:04:07.670 Test: test_fused_compare_and_write ...passed 00:04:07.670 Test: test_multi_async_event_reqs ...passed 00:04:07.670 Test: test_get_ana_log_page_one_ns_per_anagrp ...passed 00:04:07.670 Test: test_get_ana_log_page_multi_ns_per_anagrp ...passed 00:04:07.670 
Test: test_multi_async_events ...passed 00:04:07.670 Test: test_rae ...passed 00:04:07.670 Test: test_nvmf_ctrlr_create_destruct ...passed 00:04:07.670 Test: test_nvmf_ctrlr_use_zcopy ...passed 00:04:07.670 Test: test_spdk_nvmf_request_zcopy_start ...[2024-07-14 21:03:19.106208] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1648:temp_threshold_opts_valid: *ERROR*: Invalid TMPSEL 9 00:04:07.670 [2024-07-14 21:03:19.106218] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1659:temp_threshold_opts_valid: *ERROR*: Invalid THSEL 3 00:04:07.670 [2024-07-14 21:03:19.106227] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1735:nvmf_ctrlr_set_features_error_recovery: *ERROR*: Host set unsupported DULBE bit 00:04:07.670 [2024-07-14 21:03:19.106310] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4238:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong sequence of fused operations 00:04:07.670 [2024-07-14 21:03:19.106320] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4227:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:04:07.670 [2024-07-14 21:03:19.106330] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4245:nvmf_ctrlr_process_io_fused_cmd: *ERROR*: Wrong op code of fused operations 00:04:07.670 passed 00:04:07.670 Test: test_zcopy_read ...passed 00:04:07.670 Test: test_zcopy_write ...passed 00:04:07.670 Test: test_nvmf_property_set ...passed 00:04:07.670 Test: test_nvmf_ctrlr_get_features_host_behavior_support ...passed 00:04:07.670 Test: test_nvmf_ctrlr_set_features_host_behavior_support ...passed 00:04:07.670 Test: test_nvmf_ctrlr_ns_attachment ...passed 00:04:07.670 Test: test_nvmf_check_qpair_active ...passed 00:04:07.670 00:04:07.670 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.670 suites 1 1 n/a 0 0 00:04:07.670 tests 32 32 32 0 0 00:04:07.670 asserts 977 977 977 0 n/a 00:04:07.670 00:04:07.670 Elapsed time = 0.000 seconds[2024-07-14 21:03:19.106404] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 before CONNECT 00:04:07.670 [2024-07-14 21:03:19.106417] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 1 in state 4 00:04:07.670 [2024-07-14 21:03:19.106447] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:04:07.670 [2024-07-14 21:03:19.106464] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1946:nvmf_ctrlr_get_features_host_behavior_support: *ERROR*: invalid data buffer for Host Behavior Support 00:04:07.670 [2024-07-14 21:03:19.106476] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1969:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iovcnt: 0 00:04:07.670 [2024-07-14 21:03:19.106490] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1975:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid iov_len: 0 00:04:07.670 [2024-07-14 21:03:19.106499] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:1987:nvmf_ctrlr_set_features_host_behavior_support: *ERROR*: Host Behavior Support invalid acre: 0x02 00:04:07.670 [2024-07-14 21:03:19.106523] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4731:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before CONNECT 00:04:07.670 [2024-07-14 21:03:19.106543] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4745:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 before authentication 00:04:07.670 [2024-07-14 
21:03:19.106553] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 0 00:04:07.670 [2024-07-14 21:03:19.106562] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 4 00:04:07.670 [2024-07-14 21:03:19.106571] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr.c:4757:nvmf_check_qpair_active: *ERROR*: Received command 0x2 on qid 0 in state 5 00:04:07.670 00:04:07.670 21:03:19 unittest.unittest_nvmf -- unit/unittest.sh@109 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_bdev.c/ctrlr_bdev_ut 00:04:07.670 00:04:07.670 00:04:07.670 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.670 http://cunit.sourceforge.net/ 00:04:07.670 00:04:07.670 00:04:07.670 Suite: nvmf 00:04:07.670 Test: test_get_rw_params ...passed 00:04:07.670 Test: test_get_rw_ext_params ...passed 00:04:07.670 Test: test_lba_in_range ...passed 00:04:07.670 Test: test_get_dif_ctx ...passed 00:04:07.670 Test: test_nvmf_bdev_ctrlr_identify_ns ...passed 00:04:07.670 Test: test_spdk_nvmf_bdev_ctrlr_compare_and_write_cmd ...passed 00:04:07.670 Test: test_nvmf_bdev_ctrlr_zcopy_start ...passed 00:04:07.670 Test: test_nvmf_bdev_ctrlr_cmd ...[2024-07-14 21:03:19.113843] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 447:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Fused command start lba / num blocks mismatch 00:04:07.670 [2024-07-14 21:03:19.114050] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 455:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: end of media 00:04:07.670 [2024-07-14 21:03:19.114077] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 463:nvmf_bdev_ctrlr_compare_and_write_cmd: *ERROR*: Write NLB 2 * block size 512 > SGL length 1023 00:04:07.670 [2024-07-14 21:03:19.114098] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 965:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: end of media 00:04:07.670 [2024-07-14 21:03:19.114117] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 973:nvmf_bdev_ctrlr_zcopy_start: *ERROR*: Read NLB 2 * block size 512 > SGL length 1023 00:04:07.670 [2024-07-14 21:03:19.114135] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 401:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: end of media 00:04:07.670 [2024-07-14 21:03:19.114149] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 409:nvmf_bdev_ctrlr_compare_cmd: *ERROR*: Compare NLB 3 * block size 512 > SGL length 512 00:04:07.670 passed 00:04:07.670 Test: test_nvmf_bdev_ctrlr_read_write_cmd ...passed 00:04:07.670 Test: test_nvmf_bdev_ctrlr_nvme_passthru ...passed[2024-07-14 21:03:19.114167] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 500:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: invalid write zeroes size, should not exceed 1Kib 00:04:07.670 [2024-07-14 21:03:19.114182] /home/vagrant/spdk_repo/spdk/lib/nvmf/ctrlr_bdev.c: 507:nvmf_bdev_ctrlr_write_zeroes_cmd: *ERROR*: end of media 00:04:07.670 00:04:07.670 00:04:07.670 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.670 suites 1 1 n/a 0 0 00:04:07.670 tests 10 10 10 0 0 00:04:07.670 asserts 159 159 159 0 n/a 00:04:07.670 00:04:07.670 Elapsed time = 0.000 seconds 00:04:07.670 21:03:19 unittest.unittest_nvmf -- unit/unittest.sh@110 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/ctrlr_discovery.c/ctrlr_discovery_ut 00:04:07.670 00:04:07.670 00:04:07.670 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.670 http://cunit.sourceforge.net/ 00:04:07.670 00:04:07.670 00:04:07.670 Suite: nvmf 00:04:07.670 
Test: test_discovery_log ...passed 00:04:07.670 Test: test_discovery_log_with_filters ...passed 00:04:07.670 00:04:07.670 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.671 suites 1 1 n/a 0 0 00:04:07.671 tests 2 2 2 0 0 00:04:07.671 asserts 238 238 238 0 n/a 00:04:07.671 00:04:07.671 Elapsed time = 0.000 seconds 00:04:07.671 21:03:19 unittest.unittest_nvmf -- unit/unittest.sh@111 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/subsystem.c/subsystem_ut 00:04:07.671 00:04:07.671 00:04:07.671 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.671 http://cunit.sourceforge.net/ 00:04:07.671 00:04:07.671 00:04:07.671 Suite: nvmf 00:04:07.671 Test: nvmf_test_create_subsystem ...[2024-07-14 21:03:19.125223] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 126:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:". NQN must contain user specified name with a ':' as a prefix. 00:04:07.671 [2024-07-14 21:03:19.125487] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:' is invalid 00:04:07.671 [2024-07-14 21:03:19.125520] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 134:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub". At least one Label is too long. 00:04:07.671 [2024-07-14 21:03:19.125537] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.abcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyzabcdefghijklmnopqrstuvwxyz:sub' is invalid 00:04:07.671 [2024-07-14 21:03:19.125553] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.3spdk:sub". Label names must start with a letter. 00:04:07.671 [2024-07-14 21:03:19.125568] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.3spdk:sub' is invalid 00:04:07.671 [2024-07-14 21:03:19.125583] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.-spdk:subsystem1". Label names must start with a letter. 00:04:07.671 [2024-07-14 21:03:19.125597] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.-spdk:subsystem1' is invalid 00:04:07.671 [2024-07-14 21:03:19.125612] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 184:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk-:subsystem1". Label names must end with an alphanumeric symbol. 00:04:07.671 [2024-07-14 21:03:19.125626] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk-:subsystem1' is invalid 00:04:07.671 [2024-07-14 21:03:19.125641] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io..spdk:subsystem1". Label names must start with a letter. 
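The nvmf_nqn_is_valid records above and below enumerate the NQN syntax rules that spdk_nvmf_subsystem_create enforces. As a minimal illustration only (plain C, not SPDK code), the strings below are copied verbatim from the failures in this run, each with the reason the log reports:

#include <stdio.h>

/* NQNs rejected by nvmf_nqn_is_valid in this run, per the records above/below. */
static const struct {
    const char *nqn;
    const char *reason;
} bad_nqns[] = {
    { "nqn.2016-06.io.spdk:",            "no user-specified name after the ':' prefix" },
    { "nqn.2016-06.io.3spdk:sub",        "label names must start with a letter" },
    { "nqn.2016-06.io.-spdk:subsystem1", "label names must start with a letter" },
    { "nqn.2016-06.io.spdk-:subsystem1", "label names must end with an alphanumeric symbol" },
    { "nqn.2016-06.io..spdk:subsystem1", "empty label: label names must start with a letter" },
    /* also rejected below: a 224-character NQN (length 224 > max 223), labels that
     * are not valid UTF-8, and uuid-form NQNs with wrong UUID length or format */
};

int main(void)
{
    for (size_t i = 0; i < sizeof(bad_nqns) / sizeof(bad_nqns[0]); i++)
        printf("%-36s -> %s\n", bad_nqns[i].nqn, bad_nqns[i].reason);
    return 0;
}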
00:04:07.671 [2024-07-14 21:03:19.125655] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io..spdk:subsystem1' is invalid 00:04:07.671 [2024-07-14 21:03:19.125679] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 79:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa": length 224 > max 223 00:04:07.671 [2024-07-14 21:03:19.125696] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa' is invalid 00:04:07.671 passed 00:04:07.671 Test: test_spdk_nvmf_subsystem_add_ns ...passed 00:04:07.671 Test: test_spdk_nvmf_subsystem_add_fdp_ns ...passed 00:04:07.671 Test: test_spdk_nvmf_subsystem_set_sn ...passed 00:04:07.671 Test: test_spdk_nvmf_ns_visible ...passed 00:04:07.671 Test: test_reservation_register ...passed 00:04:07.671 Test: test_reservation_register_with_ptpl ...[2024-07-14 21:03:19.125739] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 207:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io.spdk:�subsystem1". Label names must contain only valid utf-8. 00:04:07.671 [2024-07-14 21:03:19.125755] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2016-06.io.spdk:�subsystem1' is invalid 00:04:07.671 [2024-07-14 21:03:19.125775] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa": uuid is not the correct length 00:04:07.671 [2024-07-14 21:03:19.125789] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b6406-0fc8-4779-80ca-4dca14bda0d2aaaa' is invalid 00:04:07.671 [2024-07-14 21:03:19.125805] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:04:07.671 [2024-07-14 21:03:19.125819] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9b64-060fc8-4779-80ca-4dca14bda0d2' is invalid 00:04:07.671 [2024-07-14 21:03:19.125835] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 102:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2": uuid is not formatted correctly 00:04:07.671 [2024-07-14 21:03:19.125850] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 233:spdk_nvmf_subsystem_create: *ERROR*: Subsystem NQN 'nqn.2014-08.org.nvmexpress:uuid:ff9hg406-0fc8-4779-80ca-4dca14bda0d2' is invalid 00:04:07.671 [2024-07-14 21:03:19.125919] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2054:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 5 already in use 00:04:07.671 [2024-07-14 21:03:19.125937] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2027:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Invalid NSID 4294967295 00:04:07.671 [2024-07-14 21:03:19.125976] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2158:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem with id: 0 can only add FDP namespace. 00:04:07.671 [2024-07-14 21:03:19.126025] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "": length 0 < min 11 00:04:07.671 [2024-07-14 21:03:19.126111] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:07.671 [2024-07-14 21:03:19.126134] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3160:nvmf_ns_reservation_register: *ERROR*: No registrant 00:04:07.671 passed 00:04:07.671 Test: test_reservation_acquire_preempt_1 ...passed 00:04:07.671 Test: test_reservation_acquire_release_with_ptpl ...passed 00:04:07.671 Test: test_reservation_release ...passed 00:04:07.671 Test: test_reservation_unregister_notification ...passed 00:04:07.671 Test: test_reservation_release_notification ...passed 00:04:07.671 Test: test_reservation_release_notification_write_exclusive ...passed 00:04:07.671 Test: test_reservation_clear_notification ...passed 00:04:07.671 Test: test_reservation_preempt_notification ...[2024-07-14 21:03:19.126405] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:07.671 [2024-07-14 21:03:19.126634] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:07.671 [2024-07-14 21:03:19.126677] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:07.671 [2024-07-14 21:03:19.126704] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:07.671 [2024-07-14 21:03:19.126729] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:07.671 [2024-07-14 21:03:19.126761] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:07.671 passed 00:04:07.671 Test: test_spdk_nvmf_ns_event ...passed 00:04:07.671 Test: test_nvmf_ns_reservation_add_remove_registrant ...passed 00:04:07.671 Test: test_nvmf_subsystem_add_ctrlr ...passed 00:04:07.671 Test: test_spdk_nvmf_subsystem_add_host ...passed 00:04:07.671 Test: test_nvmf_ns_reservation_report ...passed 00:04:07.671 Test: test_nvmf_nqn_is_valid ...passed 00:04:07.671 Test: test_nvmf_ns_reservation_restore ...passed 00:04:07.671 Test: test_nvmf_subsystem_state_change ...passed 00:04:07.671 Test: test_nvmf_reservation_custom_ops ...passed 00:04:07.671 00:04:07.671 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.671 suites 1 1 n/a 0 0 00:04:07.671 tests 24 24 24 0 0 00:04:07.671 asserts 499 499 499 0 n/a 00:04:07.671 00:04:07.671 Elapsed time = 0.000 seconds 00:04:07.671 [2024-07-14 21:03:19.126786] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3104:nvmf_ns_reservation_register: *ERROR*: The same host already register a key with 0xa1 00:04:07.671 [2024-07-14 21:03:19.126907] /home/vagrant/spdk_repo/spdk/lib/nvmf/transport.c: 265:nvmf_transport_create: *ERROR*: max_aq_depth 0 is less than minimum defined by NVMf spec, use min value 00:04:07.671 [2024-07-14 21:03:19.126936] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to transport_ut transport 00:04:07.671 [2024-07-14 21:03:19.126969] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:3466:nvmf_ns_reservation_report: *ERROR*: NVMeoF uses extended controller data structure, please set EDS bit in cdw11 and try again 00:04:07.671 [2024-07-14 21:03:19.127001] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 85:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.": length 4 < min 11 00:04:07.671 [2024-07-14 21:03:19.127018] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 97:nvmf_nqn_is_valid: *ERROR*: Invalid NQN "nqn.2014-08.org.nvmexpress:uuid:7f26a605-4224-11ef-aa83-81fbc7dfef5": uuid is not the correct length 00:04:07.671 [2024-07-14 21:03:19.127033] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c: 146:nvmf_nqn_is_valid: *ERROR*: Invalid domain name in NQN "nqn.2016-06.io...spdk:cnode1". Label names must start with a letter. 00:04:07.671 [2024-07-14 21:03:19.127075] /home/vagrant/spdk_repo/spdk/lib/nvmf/subsystem.c:2659:nvmf_ns_reservation_restore: *ERROR*: Existing bdev UUID is not same with configuration file 00:04:07.671 21:03:19 unittest.unittest_nvmf -- unit/unittest.sh@112 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/tcp.c/tcp_ut 00:04:07.671 00:04:07.671 00:04:07.671 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.671 http://cunit.sourceforge.net/ 00:04:07.671 00:04:07.671 00:04:07.671 Suite: nvmf 00:04:07.671 Test: test_nvmf_tcp_create ...[2024-07-14 21:03:19.139910] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c: 745:nvmf_tcp_create: *ERROR*: Unsupported IO Unit size specified, 16 bytes 00:04:07.671 passed 00:04:07.671 Test: test_nvmf_tcp_destroy ...passed 00:04:07.671 Test: test_nvmf_tcp_poll_group_create ...passed 00:04:07.671 Test: test_nvmf_tcp_send_c2h_data ...passed 00:04:07.671 Test: test_nvmf_tcp_h2c_data_hdr_handle ...passed 00:04:07.671 Test: test_nvmf_tcp_in_capsule_data_handle ...passed 00:04:07.671 Test: test_nvmf_tcp_qpair_init_mem_resource ...passed 00:04:07.671 Test: test_nvmf_tcp_send_c2h_term_req ...passed 00:04:07.671 Test: test_nvmf_tcp_send_capsule_resp_pdu ...passed 00:04:07.671 Test: test_nvmf_tcp_icreq_handle ...[2024-07-14 21:03:19.151386] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.671 [2024-07-14 21:03:19.151410] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b858 is same with the state(5) to be set 00:04:07.671 [2024-07-14 21:03:19.151420] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b858 is same with the state(5) to be set 00:04:07.671 [2024-07-14 21:03:19.151429] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.671 [2024-07-14 21:03:19.151437] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b858 is same with the state(5) to be set 00:04:07.671 [2024-07-14 21:03:19.151466] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:04:07.671 [2024-07-14 21:03:19.151476] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.671 [2024-07-14 21:03:19.151484] 
/home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b760 is same with the state(5) to be set 00:04:07.672 [2024-07-14 21:03:19.151492] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2122:nvmf_tcp_icreq_handle: *ERROR*: Expected ICReq PFV 0, got 1 00:04:07.672 [2024-07-14 21:03:19.151500] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b760 is same with the state(5) to be set 00:04:07.672 [2024-07-14 21:03:19.151508] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.672 [2024-07-14 21:03:19.151516] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b760 is same with the state(5) to be set 00:04:07.672 passed 00:04:07.672 Test: test_nvmf_tcp_check_xfer_type ...passed 00:04:07.672 Test: test_nvmf_tcp_invalid_sgl ...passed 00:04:07.672 Test: test_nvmf_tcp_pdu_ch_handle ...[2024-07-14 21:03:19.151525] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write IC_RESP to socket: rc=0, errno=0 00:04:07.672 [2024-07-14 21:03:19.151533] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b760 is same with the state(5) to be set 00:04:07.672 [2024-07-14 21:03:19.151574] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2518:nvmf_tcp_req_parse_sgl: *ERROR*: SGL length 0x1001 exceeds max io size 0x1000 00:04:07.672 [2024-07-14 21:03:19.151583] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.672 [2024-07-14 21:03:19.151591] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b760 is same with the state(5) to be set 00:04:07.672 [2024-07-14 21:03:19.151601] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2249:nvmf_tcp_pdu_ch_handle: *ERROR*: Already received ICreq PDU, and reject this pdu=0x82106afe8 00:04:07.672 [2024-07-14 21:03:19.151610] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.672 [2024-07-14 21:03:19.151617] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b858 is same with the state(5) to be set 00:04:07.672 [2024-07-14 21:03:19.151626] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2308:nvmf_tcp_pdu_ch_handle: *ERROR*: PDU type=0x00, Expected ICReq header length 128, got 0 on tqpair=0x82106b858 00:04:07.672 [2024-07-14 21:03:19.151634] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.672 [2024-07-14 21:03:19.151642] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b858 is same with the state(5) to be set 00:04:07.672 [2024-07-14 21:03:19.151655] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2259:nvmf_tcp_pdu_ch_handle: *ERROR*: The TCP/IP connection is not negotiated 00:04:07.672 [2024-07-14 21:03:19.151663] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.672 [2024-07-14 21:03:19.151672] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: 
The recv state of tqpair=0x82106b858 is same with the state(5) to be set 00:04:07.672 [2024-07-14 21:03:19.151680] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:2298:nvmf_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x05 00:04:07.672 [2024-07-14 21:03:19.151688] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.672 [2024-07-14 21:03:19.151696] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b858 is same with the state(5) to be set 00:04:07.672 [2024-07-14 21:03:19.151704] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.672 [2024-07-14 21:03:19.151712] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b858 is same with the state(5) to be set 00:04:07.672 [2024-07-14 21:03:19.151720] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.672 [2024-07-14 21:03:19.151728] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b858 is same with the state(5) to be set 00:04:07.672 [2024-07-14 21:03:19.151736] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.672 [2024-07-14 21:03:19.151744] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b858 is same with the state(5) to be set 00:04:07.672 [2024-07-14 21:03:19.151752] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.672 passed 00:04:07.672 Test: test_nvmf_tcp_tls_add_remove_credentials ...[2024-07-14 21:03:19.151760] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b858 is same with the state(5) to be set 00:04:07.672 [2024-07-14 21:03:19.151784] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.672 [2024-07-14 21:03:19.151793] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b858 is same with the state(5) to be set 00:04:07.672 [2024-07-14 21:03:19.151801] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1088:_tcp_write_pdu: *ERROR*: Could not write TERM_REQ to socket: rc=0, errno=0 00:04:07.672 [2024-07-14 21:03:19.151809] /home/vagrant/spdk_repo/spdk/lib/nvmf/tcp.c:1608:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x82106b858 is same with the state(5) to be set 00:04:07.672 passed 00:04:07.672 Test: test_nvmf_tcp_tls_generate_psk_id ...passed 00:04:07.672 Test: test_nvmf_tcp_tls_generate_retained_psk ...[2024-07-14 21:03:19.158001] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 591:nvme_tcp_generate_psk_identity: *ERROR*: Out buffer too small! 00:04:07.672 [2024-07-14 21:03:19.158022] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 602:nvme_tcp_generate_psk_identity: *ERROR*: Unknown cipher suite requested! 
00:04:07.672 passed 00:04:07.672 Test: test_nvmf_tcp_tls_generate_tls_psk ...passed 00:04:07.672 00:04:07.672 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.672 suites 1 1 n/a 0 0 00:04:07.672 tests 17 17 17 0 0 00:04:07.672 asserts 222 222 222 0 n/a 00:04:07.672 00:04:07.672 Elapsed time = 0.023 seconds 00:04:07.672 [2024-07-14 21:03:19.158133] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 658:nvme_tcp_derive_retained_psk: *ERROR*: Unknown PSK hash requested! 00:04:07.672 [2024-07-14 21:03:19.158147] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 663:nvme_tcp_derive_retained_psk: *ERROR*: Insufficient buffer size for out key! 00:04:07.672 [2024-07-14 21:03:19.158210] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 732:nvme_tcp_derive_tls_psk: *ERROR*: Unknown cipher suite requested! 00:04:07.672 [2024-07-14 21:03:19.158220] /home/vagrant/spdk_repo/spdk/include/spdk_internal/nvme_tcp.h: 756:nvme_tcp_derive_tls_psk: *ERROR*: Insufficient buffer size for out key! 00:04:07.672 21:03:19 unittest.unittest_nvmf -- unit/unittest.sh@113 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/nvmf.c/nvmf_ut 00:04:07.672 00:04:07.672 00:04:07.672 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.672 http://cunit.sourceforge.net/ 00:04:07.672 00:04:07.672 00:04:07.672 Suite: nvmf 00:04:07.672 Test: test_nvmf_tgt_create_poll_group ...passed 00:04:07.672 00:04:07.672 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.672 suites 1 1 n/a 0 0 00:04:07.672 tests 1 1 1 0 0 00:04:07.672 asserts 17 17 17 0 n/a 00:04:07.672 00:04:07.672 Elapsed time = 0.008 seconds 00:04:07.672 00:04:07.672 real 0m0.070s 00:04:07.672 user 0m0.011s 00:04:07.672 sys 0m0.057s 00:04:07.672 ************************************ 00:04:07.672 END TEST unittest_nvmf 00:04:07.672 ************************************ 00:04:07.672 21:03:19 unittest.unittest_nvmf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.672 21:03:19 unittest.unittest_nvmf -- common/autotest_common.sh@10 -- # set +x 00:04:07.672 21:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:07.672 21:03:19 unittest -- unit/unittest.sh@262 -- # grep -q '#define SPDK_CONFIG_FC 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:07.672 21:03:19 unittest -- unit/unittest.sh@267 -- # grep -q '#define SPDK_CONFIG_RDMA 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:07.672 21:03:19 unittest -- unit/unittest.sh@268 -- # run_test unittest_nvmf_rdma /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:04:07.672 21:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.672 21:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.672 21:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:07.932 ************************************ 00:04:07.932 START TEST unittest_nvmf_rdma 00:04:07.932 ************************************ 00:04:07.932 21:03:19 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/nvmf/rdma.c/rdma_ut 00:04:07.932 00:04:07.932 00:04:07.932 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.932 http://cunit.sourceforge.net/ 00:04:07.932 00:04:07.932 00:04:07.932 Suite: nvmf 00:04:07.932 Test: test_spdk_nvmf_rdma_request_parse_sgl ...[2024-07-14 21:03:19.218362] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1864:nvmf_rdma_request_parse_sgl: *ERROR*: SGL length 0x40000 exceeds max io size 0x20000 
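Each *_ut binary that unittest.sh invokes here is a standalone CUnit 2.1-3 program, and the "Run Summary: Type Total Ran Passed Failed Inactive" tables in this log are CUnit's standard basic-mode output. A minimal sketch of such a harness, with a hypothetical suite and test name (the real registration code lives in each test's .c file under test/unit/lib/):

#include <CUnit/Basic.h>

/* Placeholder test body: real suites register functions like test_nvmf_tcp_create. */
static void test_example(void)
{
    CU_ASSERT_EQUAL(1 + 1, 2);
}

int main(void)
{
    CU_pSuite suite;

    if (CU_initialize_registry() != CUE_SUCCESS)
        return CU_get_error();

    suite = CU_add_suite("example", NULL, NULL); /* hypothetical suite name */
    if (suite == NULL || CU_add_test(suite, "test_example", test_example) == NULL) {
        CU_cleanup_registry();
        return CU_get_error();
    }

    CU_basic_set_mode(CU_BRM_VERBOSE);
    CU_basic_run_tests();    /* prints the per-suite Run Summary seen throughout this log */
    CU_cleanup_registry();
    return CU_get_error();
}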
00:04:07.932 [2024-07-14 21:03:19.218546] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x1000 exceeds capsule length 0x0 00:04:07.932 passed 00:04:07.932 Test: test_spdk_nvmf_rdma_request_process ...passed 00:04:07.932 Test: test_nvmf_rdma_get_optimal_poll_group ...passed 00:04:07.932 Test: test_spdk_nvmf_rdma_request_parse_sgl_with_md ...passed 00:04:07.932 Test: test_nvmf_rdma_opts_init ...passed 00:04:07.932 Test: test_nvmf_rdma_request_free_data ...passed 00:04:07.932 Test: test_nvmf_rdma_resources_create ...[2024-07-14 21:03:19.218563] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c:1914:nvmf_rdma_request_parse_sgl: *ERROR*: In-capsule data length 0x2000 exceeds capsule length 0x1000 00:04:07.932 passed 00:04:07.932 Test: test_nvmf_rdma_qpair_compare ...passed 00:04:07.932 Test: test_nvmf_rdma_resize_cq ...[2024-07-14 21:03:19.219366] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 955:nvmf_rdma_resize_cq: *ERROR*: iWARP doesn't support CQ resize. Current capacity 20, required 0 00:04:07.932 Using CQ of insufficient size may lead to CQ overrun 00:04:07.932 [2024-07-14 21:03:19.219388] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 960:nvmf_rdma_resize_cq: *ERROR*: RDMA CQE requirement (26) exceeds device max_cqe limitation (3) 00:04:07.932 passed 00:04:07.932 00:04:07.932 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.932 suites 1 1 n/a 0 0 00:04:07.932 tests 9 9 9 0 0 00:04:07.932 asserts 579 579 579 0 n/a 00:04:07.932 00:04:07.932 Elapsed time = 0.000 seconds 00:04:07.932 [2024-07-14 21:03:19.219444] /home/vagrant/spdk_repo/spdk/lib/nvmf/rdma.c: 967:nvmf_rdma_resize_cq: *ERROR*: RDMA CQ resize failed: errno 0: No error: 0 00:04:07.932 00:04:07.932 real 0m0.006s 00:04:07.932 user 0m0.000s 00:04:07.932 sys 0m0.005s 00:04:07.932 21:03:19 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.932 21:03:19 unittest.unittest_nvmf_rdma -- common/autotest_common.sh@10 -- # set +x 00:04:07.932 ************************************ 00:04:07.932 END TEST unittest_nvmf_rdma 00:04:07.932 ************************************ 00:04:07.932 21:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:07.932 21:03:19 unittest -- unit/unittest.sh@271 -- # grep -q '#define SPDK_CONFIG_VFIO_USER 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:07.932 21:03:19 unittest -- unit/unittest.sh@275 -- # run_test unittest_scsi unittest_scsi 00:04:07.932 21:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.932 21:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.932 21:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:07.932 ************************************ 00:04:07.932 START TEST unittest_scsi 00:04:07.932 ************************************ 00:04:07.932 21:03:19 unittest.unittest_scsi -- common/autotest_common.sh@1123 -- # unittest_scsi 00:04:07.932 21:03:19 unittest.unittest_scsi -- unit/unittest.sh@117 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/dev.c/dev_ut 00:04:07.932 00:04:07.932 00:04:07.932 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.932 http://cunit.sourceforge.net/ 00:04:07.932 00:04:07.932 00:04:07.932 Suite: dev_suite 00:04:07.932 Test: dev_destruct_null_dev ...passed 00:04:07.932 Test: dev_destruct_zero_luns ...passed 00:04:07.932 Test: dev_destruct_null_lun ...passed 00:04:07.932 Test: dev_destruct_success ...passed 00:04:07.932 Test: dev_construct_num_luns_zero 
...passed 00:04:07.932 Test: dev_construct_no_lun_zero ...[2024-07-14 21:03:19.267189] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 228:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUNs specified 00:04:07.932 [2024-07-14 21:03:19.267461] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 241:spdk_scsi_dev_construct_ext: *ERROR*: device Name: no LUN 0 specified 00:04:07.932 passed 00:04:07.932 Test: dev_construct_null_lun ...passed 00:04:07.932 Test: dev_construct_name_too_long ...[2024-07-14 21:03:19.267488] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 248:spdk_scsi_dev_construct_ext: *ERROR*: NULL spdk_scsi_lun for LUN 0 00:04:07.933 [2024-07-14 21:03:19.267509] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 223:spdk_scsi_dev_construct_ext: *ERROR*: device xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx: name longer than maximum allowed length 255 00:04:07.933 passed 00:04:07.933 Test: dev_construct_success ...passed 00:04:07.933 Test: dev_construct_success_lun_zero_not_first ...passed 00:04:07.933 Test: dev_queue_mgmt_task_success ...passed 00:04:07.933 Test: dev_queue_task_success ...passed 00:04:07.933 Test: dev_stop_success ...passed 00:04:07.933 Test: dev_add_port_max_ports ...passed 00:04:07.933 Test: dev_add_port_construct_failure1 ...passed 00:04:07.933 Test: dev_add_port_construct_failure2 ...passed 00:04:07.933 Test: dev_add_port_success1 ...passed 00:04:07.933 Test: dev_add_port_success2 ...passed 00:04:07.933 Test: dev_add_port_success3 ...passed 00:04:07.933 Test: dev_find_port_by_id_num_ports_zero ...passed 00:04:07.933 Test: dev_find_port_by_id_id_not_found_failure ...passed 00:04:07.933 Test: dev_find_port_by_id_success ...passed 00:04:07.933 Test: dev_add_lun_bdev_not_found ...passed 00:04:07.933 Test: dev_add_lun_no_free_lun_id ...[2024-07-14 21:03:19.267565] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 315:spdk_scsi_dev_add_port: *ERROR*: device already has 4 ports 00:04:07.933 [2024-07-14 21:03:19.267585] /home/vagrant/spdk_repo/spdk/lib/scsi/port.c: 49:scsi_port_construct: *ERROR*: port name too long 00:04:07.933 [2024-07-14 21:03:19.267604] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 321:spdk_scsi_dev_add_port: *ERROR*: device already has port(1) 00:04:07.933 passed 00:04:07.933 Test: dev_add_lun_success1 ...passed 00:04:07.933 Test: dev_add_lun_success2 ...passed 00:04:07.933 Test: dev_check_pending_tasks ...passed 00:04:07.933 Test: dev_iterate_luns ...passed 00:04:07.933 Test: dev_find_free_lun ...[2024-07-14 21:03:19.267888] /home/vagrant/spdk_repo/spdk/lib/scsi/dev.c: 159:spdk_scsi_dev_add_lun_ext: *ERROR*: Free LUN ID is not found 00:04:07.933 passed 00:04:07.933 00:04:07.933 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.933 suites 1 1 n/a 0 0 00:04:07.933 tests 29 29 29 0 0 00:04:07.933 asserts 97 97 97 0 n/a 00:04:07.933 00:04:07.933 Elapsed time = 0.000 seconds 00:04:07.933 21:03:19 unittest.unittest_scsi -- unit/unittest.sh@118 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/lun.c/lun_ut 00:04:07.933 00:04:07.933 00:04:07.933 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.933 http://cunit.sourceforge.net/ 00:04:07.933 00:04:07.933 00:04:07.933 Suite: lun_suite 00:04:07.933 Test: lun_task_mgmt_execute_abort_task_not_supported ...passed 00:04:07.933 Test: lun_task_mgmt_execute_abort_task_all_not_supported ...passed 
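The dev_suite failures above exercise the argument checks in spdk_scsi_dev_construct_ext, including the 255-character cap on device names that dev_construct_name_too_long triggers. A sketch of that kind of guard, with an assumed constant name (the log shows the limit value but not the SPDK macro):

#include <errno.h>
#include <string.h>

#define DEV_MAX_NAME_LEN 255 /* limit reported in the log; this macro name is assumed */

/* Reject overly long device names, as the name_too_long case above expects. */
static int validate_dev_name(const char *name)
{
    if (name == NULL || strlen(name) > DEV_MAX_NAME_LEN)
        return -EINVAL;
    return 0;
}

int main(void)
{
    return validate_dev_name("Name") == 0 ? 0 : 1;
}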
00:04:07.933 Test: lun_task_mgmt_execute_lun_reset ...passed 00:04:07.933 Test: lun_task_mgmt_execute_target_reset ...passed 00:04:07.933 Test: lun_task_mgmt_execute_invalid_case ...[2024-07-14 21:03:19.275659] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task not supported 00:04:07.933 [2024-07-14 21:03:19.275937] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: abort task set not supported 00:04:07.933 passed 00:04:07.933 Test: lun_append_task_null_lun_task_cdb_spc_inquiry ...passed 00:04:07.933 Test: lun_append_task_null_lun_alloc_len_lt_4096 ...passed 00:04:07.933 Test: lun_append_task_null_lun_not_supported ...passed 00:04:07.933 Test: lun_execute_scsi_task_pending ...passed 00:04:07.933 Test: lun_execute_scsi_task_complete ...[2024-07-14 21:03:19.275975] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 169:_scsi_lun_execute_mgmt_task: *ERROR*: unknown task not supported 00:04:07.933 passed 00:04:07.933 Test: lun_execute_scsi_task_resize ...passed 00:04:07.933 Test: lun_destruct_success ...passed 00:04:07.933 Test: lun_construct_null_ctx ...passed 00:04:07.933 Test: lun_construct_success ...passed 00:04:07.933 Test: lun_reset_task_wait_scsi_task_complete ...passed 00:04:07.933 Test: lun_reset_task_suspend_scsi_task ...passed 00:04:07.933 Test: lun_check_pending_tasks_only_for_specific_initiator ...passed 00:04:07.933 Test: abort_pending_mgmt_tasks_when_lun_is_removed ...passed 00:04:07.933 00:04:07.933 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.933 suites 1 1 n/a 0 0 00:04:07.933 tests 18 18 18 0 0 00:04:07.933 asserts 153 153 153 0 n/a 00:04:07.933 00:04:07.933 Elapsed time = 0.000 seconds 00:04:07.933 [2024-07-14 21:03:19.276051] /home/vagrant/spdk_repo/spdk/lib/scsi/lun.c: 432:scsi_lun_construct: *ERROR*: bdev_name must be non-NULL 00:04:07.933 21:03:19 unittest.unittest_scsi -- unit/unittest.sh@119 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi.c/scsi_ut 00:04:07.933 00:04:07.933 00:04:07.933 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.933 http://cunit.sourceforge.net/ 00:04:07.933 00:04:07.933 00:04:07.933 Suite: scsi_suite 00:04:07.933 Test: scsi_init ...passed 00:04:07.933 00:04:07.933 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.933 suites 1 1 n/a 0 0 00:04:07.933 tests 1 1 1 0 0 00:04:07.933 asserts 1 1 1 0 n/a 00:04:07.933 00:04:07.933 Elapsed time = 0.000 seconds 00:04:07.933 21:03:19 unittest.unittest_scsi -- unit/unittest.sh@120 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_bdev.c/scsi_bdev_ut 00:04:07.933 00:04:07.933 00:04:07.933 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.933 http://cunit.sourceforge.net/ 00:04:07.933 00:04:07.933 00:04:07.933 Suite: translation_suite 00:04:07.933 Test: mode_select_6_test ...passed 00:04:07.933 Test: mode_select_6_test2 ...passed 00:04:07.933 Test: mode_sense_6_test ...passed 00:04:07.933 Test: mode_sense_10_test ...passed 00:04:07.933 Test: inquiry_evpd_test ...passed 00:04:07.933 Test: inquiry_standard_test ...passed 00:04:07.933 Test: inquiry_overflow_test ...passed 00:04:07.933 Test: task_complete_test ...passed 00:04:07.933 Test: lba_range_test ...passed 00:04:07.933 Test: xfer_len_test ...passed 00:04:07.933 Test: xfer_test ...passed 00:04:07.933 Test: scsi_name_padding_test ...passed 00:04:07.933 Test: get_dif_ctx_test ...passed 00:04:07.933 Test: unmap_split_test ...passed 00:04:07.933 00:04:07.934 Run Summary: Type Total Ran Passed Failed Inactive 
00:04:07.934 suites 1 1 n/a 0 0 00:04:07.934 tests 14 14 14 0 0 00:04:07.934 asserts 1205 1205 1205 0 n/a 00:04:07.934 00:04:07.934 Elapsed time = 0.000 seconds 00:04:07.934 [2024-07-14 21:03:19.287342] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_bdev.c:1271:bdev_scsi_readwrite: *ERROR*: xfer_len 8193 > maximum transfer length 8192 00:04:07.934 21:03:19 unittest.unittest_scsi -- unit/unittest.sh@121 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/scsi/scsi_pr.c/scsi_pr_ut 00:04:07.934 00:04:07.934 00:04:07.934 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.934 http://cunit.sourceforge.net/ 00:04:07.934 00:04:07.934 00:04:07.934 Suite: reservation_suite 00:04:07.934 Test: test_reservation_register ...passed 00:04:07.934 Test: test_reservation_reserve ...passed 00:04:07.934 Test: test_all_registrant_reservation_reserve ...passed 00:04:07.934 Test: test_all_registrant_reservation_access ...passed 00:04:07.934 Test: test_reservation_preempt_non_all_regs ...passed 00:04:07.934 Test: test_reservation_preempt_all_regs ...passed 00:04:07.934 Test: test_reservation_cmds_conflict ...[2024-07-14 21:03:19.293547] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:07.934 [2024-07-14 21:03:19.293798] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:07.934 [2024-07-14 21:03:19.293824] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 215:scsi_pr_out_reserve: *ERROR*: Only 1 holder is allowed for type 1 00:04:07.934 [2024-07-14 21:03:19.293842] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 210:scsi_pr_out_reserve: *ERROR*: Reservation type doesn't match 00:04:07.934 [2024-07-14 21:03:19.293866] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:07.934 [2024-07-14 21:03:19.293906] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:07.934 [2024-07-14 21:03:19.293936] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0x8 00:04:07.934 [2024-07-14 21:03:19.293953] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 866:scsi_pr_check: *ERROR*: CHECK: All Registrants reservation type reject command 0xaa 00:04:07.934 [2024-07-14 21:03:19.293975] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:07.934 [2024-07-14 21:03:19.293992] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 464:scsi_pr_out_preempt: *ERROR*: Zeroed sa_rkey 00:04:07.934 [2024-07-14 21:03:19.294016] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:07.934 [2024-07-14 21:03:19.294043] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:07.934 [2024-07-14 21:03:19.294060] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 858:scsi_pr_check: *ERROR*: CHECK: Registrants only reservation type reject command 0x2a 00:04:07.934 [2024-07-14 21:03:19.294075] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:04:07.934 [2024-07-14 
21:03:19.294089] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:04:07.934 [2024-07-14 21:03:19.294103] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x28 00:04:07.934 [2024-07-14 21:03:19.294118] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 852:scsi_pr_check: *ERROR*: CHECK: Exclusive Access reservation type rejects command 0x2a 00:04:07.934 passed 00:04:07.934 Test: test_scsi2_reserve_release ...passed 00:04:07.934 Test: test_pr_with_scsi2_reserve_release ...passed 00:04:07.934 00:04:07.934 [2024-07-14 21:03:19.294149] /home/vagrant/spdk_repo/spdk/lib/scsi/scsi_pr.c: 279:scsi_pr_out_register: *ERROR*: Reservation key 0xa1 don't match registrant's key 0xa 00:04:07.934 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.934 suites 1 1 n/a 0 0 00:04:07.934 tests 9 9 9 0 0 00:04:07.934 asserts 344 344 344 0 n/a 00:04:07.934 00:04:07.934 Elapsed time = 0.000 seconds 00:04:07.934 00:04:07.934 real 0m0.033s 00:04:07.934 user 0m0.005s 00:04:07.934 sys 0m0.027s 00:04:07.934 21:03:19 unittest.unittest_scsi -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.934 ************************************ 00:04:07.934 END TEST unittest_scsi 00:04:07.934 ************************************ 00:04:07.934 21:03:19 unittest.unittest_scsi -- common/autotest_common.sh@10 -- # set +x 00:04:07.934 21:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:07.934 21:03:19 unittest -- unit/unittest.sh@278 -- # uname -s 00:04:07.934 21:03:19 unittest -- unit/unittest.sh@278 -- # '[' FreeBSD = Linux ']' 00:04:07.934 21:03:19 unittest -- unit/unittest.sh@281 -- # run_test unittest_thread /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:04:07.934 21:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.934 21:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.934 21:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:07.934 ************************************ 00:04:07.934 START TEST unittest_thread 00:04:07.934 ************************************ 00:04:07.934 21:03:19 unittest.unittest_thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/thread.c/thread_ut 00:04:07.934 00:04:07.934 00:04:07.934 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.934 http://cunit.sourceforge.net/ 00:04:07.934 00:04:07.934 00:04:07.934 Suite: io_channel 00:04:07.934 Test: thread_alloc ...passed 00:04:07.934 Test: thread_send_msg ...passed 00:04:07.934 Test: thread_poller ...passed 00:04:07.934 Test: poller_pause ...passed 00:04:07.934 Test: thread_for_each ...passed 00:04:07.934 Test: for_each_channel_remove ...passed 00:04:07.934 Test: for_each_channel_unreg ...[2024-07-14 21:03:19.347192] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2178:spdk_io_device_register: *ERROR*: io_device 0x82082d224 already registered (old:0x1f5395467000 new:0x1f5395467180) 00:04:07.934 passed 00:04:07.934 Test: thread_name ...passed 00:04:07.934 Test: channel ...[2024-07-14 21:03:19.348166] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:2311:spdk_get_io_channel: *ERROR*: could not find io_device 0x228838 00:04:07.934 passed 00:04:07.934 Test: channel_destroy_races ...passed 00:04:07.934 Test: thread_exit_test ...[2024-07-14 21:03:19.348919] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 
640:thread_exit: *ERROR*: thread 0x1f539542ca80 got timeout, and move it to the exited state forcefully 00:04:07.934 passed 00:04:07.934 Test: thread_update_stats_test ...passed 00:04:07.934 Test: nested_channel ...passed 00:04:07.934 Test: device_unregister_and_thread_exit_race ...passed 00:04:07.934 Test: cache_closest_timed_poller ...passed 00:04:07.934 Test: multi_timed_pollers_have_same_expiration ...passed 00:04:07.934 Test: io_device_lookup ...passed 00:04:07.934 Test: spdk_spin ...[2024-07-14 21:03:19.350298] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3082:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:04:07.934 [2024-07-14 21:03:19.350327] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82082d220 00:04:07.934 [2024-07-14 21:03:19.350348] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3120:spdk_spin_held: *ERROR*: unrecoverable spinlock error 1: Not an SPDK thread (thread != ((void *)0)) 00:04:07.934 [2024-07-14 21:03:19.350560] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:04:07.934 [2024-07-14 21:03:19.350580] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82082d220 00:04:07.934 [2024-07-14 21:03:19.350589] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:04:07.934 [2024-07-14 21:03:19.350598] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82082d220 00:04:07.934 [2024-07-14 21:03:19.350611] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3103:spdk_spin_unlock: *ERROR*: unrecoverable spinlock error 3: Unlock on wrong SPDK thread (thread == sspin->thread) 00:04:07.934 passed 00:04:07.934 Test: for_each_channel_and_thread_exit_race ...[2024-07-14 21:03:19.350619] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82082d220 00:04:07.935 [2024-07-14 21:03:19.350631] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3064:spdk_spin_destroy: *ERROR*: unrecoverable spinlock error 5: Destroying a held spinlock (sspin->thread == ((void *)0)) 00:04:07.935 [2024-07-14 21:03:19.350647] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x82082d220 00:04:07.935 passed 00:04:07.935 Test: for_each_thread_and_thread_exit_race ...passed 00:04:07.935 00:04:07.935 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.935 suites 1 1 n/a 0 0 00:04:07.935 tests 20 20 20 0 0 00:04:07.935 asserts 409 409 409 0 n/a 00:04:07.935 00:04:07.935 Elapsed time = 0.008 seconds 00:04:07.935 00:04:07.935 real 0m0.013s 00:04:07.935 user 0m0.010s 00:04:07.935 sys 0m0.004s 00:04:07.935 21:03:19 unittest.unittest_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.935 21:03:19 unittest.unittest_thread -- common/autotest_common.sh@10 -- # set +x 00:04:07.935 ************************************ 00:04:07.935 END TEST unittest_thread 00:04:07.935 ************************************ 00:04:07.935 21:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:07.935 21:03:19 unittest -- unit/unittest.sh@282 -- # run_test unittest_iobuf /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:04:07.935 21:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 
']' 00:04:07.935 21:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.935 21:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:07.935 ************************************ 00:04:07.935 START TEST unittest_iobuf 00:04:07.935 ************************************ 00:04:07.935 21:03:19 unittest.unittest_iobuf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/thread/iobuf.c/iobuf_ut 00:04:07.935 00:04:07.935 00:04:07.935 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.935 http://cunit.sourceforge.net/ 00:04:07.935 00:04:07.935 00:04:07.935 Suite: io_channel 00:04:07.935 Test: iobuf ...passed 00:04:07.935 Test: iobuf_cache ...[2024-07-14 21:03:19.402502] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf small buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:04:07.935 [2024-07-14 21:03:19.402805] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:04:07.935 [2024-07-14 21:03:19.402875] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 374:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module0' iobuf large buffer cache at 4/5 entries. You may need to increase spdk_iobuf_opts.large_pool_count (4) 00:04:07.935 [2024-07-14 21:03:19.402895] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 376:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 00:04:07.935 [2024-07-14 21:03:19.402922] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 362:spdk_iobuf_channel_init: *ERROR*: Failed to populate 'ut_module1' iobuf small buffer cache at 0/4 entries. You may need to increase spdk_iobuf_opts.small_pool_count (4) 00:04:07.935 [2024-07-14 21:03:19.402950] /home/vagrant/spdk_repo/spdk/lib/thread/iobuf.c: 364:spdk_iobuf_channel_init: *ERROR*: See scripts/calc-iobuf.py for guidance on how to calculate this value. 
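The iobuf_cache failures above are deliberate: the test shrinks the global pools so that per-module caches such as 'ut_module0' cannot be fully populated, and the error text points at spdk_iobuf_opts.small_pool_count and large_pool_count. A sketch of raising those pools before initialization, assuming the spdk_iobuf_set_opts entry point and this call shape (only the two field names are confirmed by the log):

#include "spdk/thread.h"

/* Enlarge the global iobuf pools so per-module caches can be fully populated.
 * Sizes here are illustrative; see scripts/calc-iobuf.py for real guidance. */
static void bump_iobuf_pools(void)
{
    struct spdk_iobuf_opts opts = {0};

    opts.small_pool_count = 8192; /* field named in the log */
    opts.large_pool_count = 1024; /* field named in the log */
    spdk_iobuf_set_opts(&opts);   /* assumed setter; call before spdk_iobuf_initialize() */
}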
00:04:07.935 passed 00:04:07.935 00:04:07.935 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.935 suites 1 1 n/a 0 0 00:04:07.935 tests 2 2 2 0 0 00:04:07.935 asserts 107 107 107 0 n/a 00:04:07.935 00:04:07.935 Elapsed time = 0.000 seconds 00:04:07.935 00:04:07.935 real 0m0.007s 00:04:07.935 user 0m0.006s 00:04:07.935 sys 0m0.005s 00:04:07.935 21:03:19 unittest.unittest_iobuf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:07.935 21:03:19 unittest.unittest_iobuf -- common/autotest_common.sh@10 -- # set +x 00:04:07.935 ************************************ 00:04:07.935 END TEST unittest_iobuf 00:04:07.935 ************************************ 00:04:07.935 21:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:07.935 21:03:19 unittest -- unit/unittest.sh@283 -- # run_test unittest_util unittest_util 00:04:07.935 21:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:07.935 21:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:07.935 21:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:07.935 ************************************ 00:04:07.935 START TEST unittest_util 00:04:07.935 ************************************ 00:04:07.935 21:03:19 unittest.unittest_util -- common/autotest_common.sh@1123 -- # unittest_util 00:04:07.935 21:03:19 unittest.unittest_util -- unit/unittest.sh@134 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/base64.c/base64_ut 00:04:07.935 00:04:07.935 00:04:07.935 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.935 http://cunit.sourceforge.net/ 00:04:07.935 00:04:07.935 00:04:07.935 Suite: base64 00:04:07.935 Test: test_base64_get_encoded_strlen ...passed 00:04:07.935 Test: test_base64_get_decoded_len ...passed 00:04:07.935 Test: test_base64_encode ...passed 00:04:07.935 Test: test_base64_decode ...passed 00:04:07.935 Test: test_base64_urlsafe_encode ...passed 00:04:07.935 Test: test_base64_urlsafe_decode ...passed 00:04:07.935 00:04:07.935 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.935 suites 1 1 n/a 0 0 00:04:07.935 tests 6 6 6 0 0 00:04:07.935 asserts 112 112 112 0 n/a 00:04:07.935 00:04:07.935 Elapsed time = 0.000 seconds 00:04:07.935 21:03:19 unittest.unittest_util -- unit/unittest.sh@135 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/bit_array.c/bit_array_ut 00:04:07.935 00:04:07.935 00:04:07.935 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.935 http://cunit.sourceforge.net/ 00:04:07.935 00:04:07.935 00:04:07.935 Suite: bit_array 00:04:07.935 Test: test_1bit ...passed 00:04:07.935 Test: test_64bit ...passed 00:04:07.935 Test: test_find ...passed 00:04:07.935 Test: test_resize ...passed 00:04:07.935 Test: test_errors ...passed 00:04:07.935 Test: test_count ...passed 00:04:07.935 Test: test_mask_store_load ...passed 00:04:07.935 Test: test_mask_clear ...passed 00:04:07.935 00:04:07.935 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.935 suites 1 1 n/a 0 0 00:04:07.935 tests 8 8 8 0 0 00:04:07.935 asserts 5075 5075 5075 0 n/a 00:04:07.935 00:04:07.935 Elapsed time = 0.000 seconds 00:04:07.935 21:03:19 unittest.unittest_util -- unit/unittest.sh@136 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/cpuset.c/cpuset_ut 00:04:07.935 00:04:07.935 00:04:07.935 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.935 http://cunit.sourceforge.net/ 00:04:07.935 00:04:07.935 00:04:07.935 Suite: cpuset 00:04:07.935 Test: test_cpuset ...passed 00:04:07.935 Test: test_cpuset_parse ...[2024-07-14 
21:03:19.457105] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 256:parse_list: *ERROR*: Unexpected end of core list '[' 00:04:07.935 [2024-07-14 21:03:19.457312] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[]' failed on character ']' 00:04:07.935 [2024-07-14 21:03:19.457332] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10--11]' failed on character '-' 00:04:07.935 [2024-07-14 21:03:19.457345] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 237:parse_list: *ERROR*: Invalid range of CPUs (11 > 10) 00:04:07.935 passed 00:04:07.935 Test: test_cpuset_fmt ...[2024-07-14 21:03:19.457357] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[10-11,]' failed on character ',' 00:04:07.935 [2024-07-14 21:03:19.457373] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 258:parse_list: *ERROR*: Parsing of core list '[,10-11]' failed on character ',' 00:04:07.935 [2024-07-14 21:03:19.457385] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 220:parse_list: *ERROR*: Core number 1025 is out of range in '[1025]' 00:04:07.935 [2024-07-14 21:03:19.457396] /home/vagrant/spdk_repo/spdk/lib/util/cpuset.c: 215:parse_list: *ERROR*: Conversion of core mask in '[184467440737095516150]' failed 00:04:07.935 passed 00:04:07.935 Test: test_cpuset_foreach ...passed 00:04:07.935 00:04:07.935 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.935 suites 1 1 n/a 0 0 00:04:07.935 tests 4 4 4 0 0 00:04:07.935 asserts 90 90 90 0 n/a 00:04:07.935 00:04:07.935 Elapsed time = 0.000 seconds 00:04:07.936 21:03:19 unittest.unittest_util -- unit/unittest.sh@137 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc16.c/crc16_ut 00:04:07.936 00:04:07.936 00:04:07.936 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.936 http://cunit.sourceforge.net/ 00:04:07.936 00:04:07.936 00:04:07.936 Suite: crc16 00:04:07.936 Test: test_crc16_t10dif ...passed 00:04:07.936 Test: test_crc16_t10dif_seed ...passed 00:04:07.936 Test: test_crc16_t10dif_copy ...passed 00:04:07.936 00:04:07.936 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.936 suites 1 1 n/a 0 0 00:04:07.936 tests 3 3 3 0 0 00:04:07.936 asserts 5 5 5 0 n/a 00:04:07.936 00:04:07.936 Elapsed time = 0.000 seconds 00:04:07.936 21:03:19 unittest.unittest_util -- unit/unittest.sh@138 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32_ieee.c/crc32_ieee_ut 00:04:07.936 00:04:07.936 00:04:07.936 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.936 http://cunit.sourceforge.net/ 00:04:07.936 00:04:07.936 00:04:07.936 Suite: crc32_ieee 00:04:07.936 Test: test_crc32_ieee ...passed 00:04:07.936 00:04:07.936 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.936 suites 1 1 n/a 0 0 00:04:07.936 tests 1 1 1 0 0 00:04:07.936 asserts 1 1 1 0 n/a 00:04:07.936 00:04:07.936 Elapsed time = 0.000 seconds 00:04:07.936 21:03:19 unittest.unittest_util -- unit/unittest.sh@139 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc32c.c/crc32c_ut 00:04:07.936 00:04:07.936 00:04:07.936 CUnit - A unit testing framework for C - Version 2.1-3 00:04:07.936 http://cunit.sourceforge.net/ 00:04:07.936 00:04:07.936 00:04:07.936 Suite: crc32c 00:04:07.936 Test: test_crc32c ...passed 00:04:07.936 Test: test_crc32c_nvme ...passed 00:04:07.936 00:04:07.936 Run Summary: Type Total Ran Passed Failed Inactive 00:04:07.936 suites 1 1 n/a 0 0 00:04:07.936 tests 2 2 2 0 0 00:04:07.936 asserts 16 16 16 0 n/a 00:04:07.936 
00:04:07.936 Elapsed time = 0.000 seconds 00:04:07.936 21:03:19 unittest.unittest_util -- unit/unittest.sh@140 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/crc64.c/crc64_ut 00:04:08.198 00:04:08.198 00:04:08.198 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.198 http://cunit.sourceforge.net/ 00:04:08.198 00:04:08.198 00:04:08.198 Suite: crc64 00:04:08.198 Test: test_crc64_nvme ...passed 00:04:08.198 00:04:08.198 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.198 suites 1 1 n/a 0 0 00:04:08.198 tests 1 1 1 0 0 00:04:08.198 asserts 4 4 4 0 n/a 00:04:08.198 00:04:08.198 Elapsed time = 0.000 seconds 00:04:08.198 21:03:19 unittest.unittest_util -- unit/unittest.sh@141 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/string.c/string_ut 00:04:08.198 00:04:08.198 00:04:08.198 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.198 http://cunit.sourceforge.net/ 00:04:08.198 00:04:08.198 00:04:08.198 Suite: string 00:04:08.198 Test: test_parse_ip_addr ...passed 00:04:08.198 Test: test_str_chomp ...passed 00:04:08.198 Test: test_parse_capacity ...passed 00:04:08.198 Test: test_sprintf_append_realloc ...passed 00:04:08.198 Test: test_strtol ...passed 00:04:08.198 Test: test_strtoll ...passed 00:04:08.198 Test: test_strarray ...passed 00:04:08.198 Test: test_strcpy_replace ...passed 00:04:08.198 00:04:08.198 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.198 suites 1 1 n/a 0 0 00:04:08.198 tests 8 8 8 0 0 00:04:08.198 asserts 161 161 161 0 n/a 00:04:08.198 00:04:08.198 Elapsed time = 0.000 seconds 00:04:08.198 21:03:19 unittest.unittest_util -- unit/unittest.sh@142 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/dif.c/dif_ut 00:04:08.198 00:04:08.198 00:04:08.198 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.198 http://cunit.sourceforge.net/ 00:04:08.198 00:04:08.198 00:04:08.198 Suite: dif 00:04:08.198 Test: dif_generate_and_verify_test ...[2024-07-14 21:03:19.490067] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:04:08.198 [2024-07-14 21:03:19.490276] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:04:08.198 [2024-07-14 21:03:19.490320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=23, Expected=17, Actual=16 00:04:08.198 [2024-07-14 21:03:19.490359] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:04:08.198 [2024-07-14 21:03:19.490397] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:04:08.198 [2024-07-14 21:03:19.490435] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=23, Actual=22 00:04:08.198 passed 00:04:08.198 Test: dif_disable_check_test ...[2024-07-14 21:03:19.490568] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:04:08.198 [2024-07-14 21:03:19.490606] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 00:04:08.198 [2024-07-14 21:03:19.490644] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=22, Expected=22, Actual=ffff 
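(Aside: crc16_ut, crc32_ieee_ut, crc32c_ut and crc64_ut above pass with no injected errors, since the CRC helpers are pure functions of a buffer. A hedged sketch of the update-style calls from spdk/crc16.h and spdk/crc32.h — the seed-in/complement-out convention shown for the 32-bit variants matches common SPDK usage, but verify against your tree:

#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include "spdk/crc16.h"
#include "spdk/crc32.h"

int main(void)
{
	const char buf[] = "hello spdk";
	size_t len = strlen(buf);

	/* CRC16-T10DIF, seeded with 0: this is the guard tag that dif_ut
	 * compares in the "Failed to compare Guard" messages below. */
	uint16_t guard = spdk_crc16_t10dif(0, buf, len);

	/* IEEE 802.3 CRC32 and Castagnoli CRC32C: seed with ~0, complement
	 * the result, and chain partial buffers through the crc argument. */
	uint32_t crc32 = spdk_crc32_ieee_update(buf, len, ~0U) ^ ~0U;
	uint32_t crc32c = spdk_crc32c_update(buf, len, ~0U) ^ ~0U;

	printf("guard=0x%04x crc32=0x%08x crc32c=0x%08x\n", guard, crc32, crc32c);
	return 0;
}
)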
00:04:08.198 passed 00:04:08.198 Test: dif_generate_and_verify_different_pi_formats_test ...[2024-07-14 21:03:19.490775] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a80000, Actual=b9848de 00:04:08.198 [2024-07-14 21:03:19.490815] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b98, Actual=b0a8 00:04:08.199 [2024-07-14 21:03:19.490854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b0a8000000000000, Actual=81039fcf5685d8d4 00:04:08.199 [2024-07-14 21:03:19.490893] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=12, Expected=b9848de00000000, Actual=81039fcf5685d8d4 00:04:08.199 [2024-07-14 21:03:19.490931] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:04:08.199 [2024-07-14 21:03:19.490969] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:04:08.199 [2024-07-14 21:03:19.491007] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:04:08.199 [2024-07-14 21:03:19.491045] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=17, Actual=0 00:04:08.199 [2024-07-14 21:03:19.491083] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:04:08.199 [2024-07-14 21:03:19.491122] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:04:08.199 [2024-07-14 21:03:19.491159] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=12, Expected=c, Actual=0 00:04:08.199 passed 00:04:08.199 Test: dif_apptag_mask_test ...passed 00:04:08.199 Test: dif_sec_512_md_0_error_test ...passed 00:04:08.199 Test: dif_sec_4096_md_0_error_test ...passed 00:04:08.199 Test: dif_sec_4100_md_128_error_test ...passed 00:04:08.199 Test: dif_guard_seed_test ...passed 00:04:08.199 Test: dif_guard_value_test ...passed 00:04:08.199 Test: dif_disable_sec_512_md_8_single_iov_test ...[2024-07-14 21:03:19.491199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:04:08.199 [2024-07-14 21:03:19.491238] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=12, Expected=1256, Actual=1234 00:04:08.199 [2024-07-14 21:03:19.491263] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:04:08.199 [2024-07-14 21:03:19.491273] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 00:04:08.199 [2024-07-14 21:03:19.491287] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
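(Aside: every "Failed to compare Guard/App Tag/Ref Tag" line emitted by dif_ut is expected output — each test corrupts one field of the 8-byte T10 protection-information trailer and asserts that _dif_verify reports exactly that field. For orientation, a standalone sketch of the trailer layout and the verify comparison; this is plain T10 DIF as per the spec, not SPDK's internal structures, and the struct and function names here are made up:

#include <stdint.h>
#include "spdk/crc16.h"

/* Hypothetical view of the 8-byte T10 DIF trailer that follows each
 * 512-byte data block in the 16-bit-guard PI format. */
struct t10_dif_trailer {
	uint16_t guard;   /* CRC16-T10DIF over the data block */
	uint16_t app_tag; /* application tag, checked through an apptag mask */
	uint32_t ref_tag; /* reference tag, typically the low 32 bits of the LBA */
};

/* Check one block the way the dif_ut error messages describe. */
static int verify_block(const uint8_t data[512], const struct t10_dif_trailer *pi,
			uint16_t expected_app, uint32_t expected_ref)
{
	if (pi->guard != spdk_crc16_t10dif(0, data, 512)) {
		return -1; /* "Failed to compare Guard" */
	}
	if (pi->app_tag != expected_app) {
		return -1; /* "Failed to compare App Tag" */
	}
	if (pi->ref_tag != expected_ref) {
		return -1; /* "Failed to compare Ref Tag" */
	}
	return 0;
}

On the wire the three fields are big-endian; the sketch skips the byte swaps for brevity.)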
00:04:08.199 [2024-07-14 21:03:19.491298] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:04:08.199 [2024-07-14 21:03:19.491306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 528:spdk_dif_ctx_init: *ERROR*: Zero block size is not allowed and should be a multiple of 4kB 00:04:08.199 passed 00:04:08.199 Test: dif_sec_512_md_8_prchk_0_single_iov_test ...passed 00:04:08.199 Test: dif_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:04:08.199 Test: dif_sec_512_md_8_prchk_0_1_2_4_multi_iovs_test ...passed 00:04:08.199 Test: dif_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:04:08.199 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_test ...passed 00:04:08.199 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:04:08.199 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_and_md_test ...passed 00:04:08.199 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_data_test ...passed 00:04:08.199 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:04:08.199 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_guard_test ...passed 00:04:08.199 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_guard_test ...passed 00:04:08.199 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_apptag_test ...passed 00:04:08.199 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_apptag_test ...passed 00:04:08.199 Test: dif_sec_512_md_8_prchk_7_multi_iovs_split_reftag_test ...passed 00:04:08.199 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_split_reftag_test ...passed 00:04:08.199 Test: dif_sec_512_md_8_prchk_7_multi_iovs_complex_splits_test ...passed 00:04:08.199 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:04:08.199 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-14 21:03:19.496833] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=bd4c, Actual=fd4c 00:04:08.199 [2024-07-14 21:03:19.497156] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=be21, Actual=fe21 00:04:08.199 [2024-07-14 21:03:19.497470] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.199 [2024-07-14 21:03:19.497783] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.199 [2024-07-14 21:03:19.498096] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.199 [2024-07-14 21:03:19.498407] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.199 [2024-07-14 21:03:19.498723] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=9fbf 00:04:08.199 [2024-07-14 21:03:19.498925] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fe21, Actual=2d4e 00:04:08.199 [2024-07-14 21:03:19.499128] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=5ab753ed, Actual=1ab753ed 00:04:08.199 [2024-07-14 21:03:19.499434] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare 
Guard: LBA=90, Expected=78574660, Actual=38574660 00:04:08.199 [2024-07-14 21:03:19.499741] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.199 [2024-07-14 21:03:19.500050] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.199 [2024-07-14 21:03:19.500361] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.199 [2024-07-14 21:03:19.500679] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.199 [2024-07-14 21:03:19.500989] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=6445bb6a 00:04:08.199 [2024-07-14 21:03:19.501191] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=38574660, Actual=5a35dccd 00:04:08.199 [2024-07-14 21:03:19.501391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a772cecc20d3, Actual=a576a7728ecc20d3 00:04:08.199 [2024-07-14 21:03:19.501701] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2d0837a266, Actual=88010a2d4837a266 00:04:08.199 [2024-07-14 21:03:19.502011] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.199 [2024-07-14 21:03:19.502320] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.199 [2024-07-14 21:03:19.502630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.199 [2024-07-14 21:03:19.502940] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.199 [2024-07-14 21:03:19.503250] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=57ca2ffeed1e18c8 00:04:08.199 [2024-07-14 21:03:19.503451] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=88010a2d4837a266, Actual=117237b67dbdf293 00:04:08.199 passed 00:04:08.199 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_and_md_test ...[2024-07-14 21:03:19.503519] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:04:08.199 [2024-07-14 21:03:19.503561] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:04:08.199 [2024-07-14 21:03:19.503602] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.199 [2024-07-14 21:03:19.503643] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.199 [2024-07-14 21:03:19.503684] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref 
Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.199 [2024-07-14 21:03:19.503725] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.199 [2024-07-14 21:03:19.503765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9fbf 00:04:08.199 [2024-07-14 21:03:19.503797] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2d4e 00:04:08.199 [2024-07-14 21:03:19.503828] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:04:08.199 [2024-07-14 21:03:19.503869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:04:08.199 [2024-07-14 21:03:19.503909] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.199 [2024-07-14 21:03:19.503951] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.199 [2024-07-14 21:03:19.503992] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.199 [2024-07-14 21:03:19.504032] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.199 [2024-07-14 21:03:19.504073] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6445bb6a 00:04:08.199 [2024-07-14 21:03:19.504104] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=5a35dccd 00:04:08.199 [2024-07-14 21:03:19.504135] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772cecc20d3, Actual=a576a7728ecc20d3 00:04:08.199 [2024-07-14 21:03:19.504182] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d0837a266, Actual=88010a2d4837a266 00:04:08.199 [2024-07-14 21:03:19.504223] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.199 [2024-07-14 21:03:19.504264] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.199 [2024-07-14 21:03:19.504305] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 [2024-07-14 21:03:19.504345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 passed 00:04:08.200 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_data_test ...[2024-07-14 21:03:19.504386] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=57ca2ffeed1e18c8 00:04:08.200 [2024-07-14 21:03:19.504417] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, 
Expected=88010a2d4837a266, Actual=117237b67dbdf293 00:04:08.200 [2024-07-14 21:03:19.504451] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:04:08.200 [2024-07-14 21:03:19.504492] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:04:08.200 [2024-07-14 21:03:19.504540] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.504581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.504622] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 [2024-07-14 21:03:19.504662] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 [2024-07-14 21:03:19.504703] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9fbf 00:04:08.200 [2024-07-14 21:03:19.504734] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2d4e 00:04:08.200 [2024-07-14 21:03:19.504765] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:04:08.200 [2024-07-14 21:03:19.504806] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:04:08.200 [2024-07-14 21:03:19.504846] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.504887] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.504928] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 [2024-07-14 21:03:19.504968] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 [2024-07-14 21:03:19.505009] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6445bb6a 00:04:08.200 [2024-07-14 21:03:19.505040] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=5a35dccd 00:04:08.200 [2024-07-14 21:03:19.505071] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772cecc20d3, Actual=a576a7728ecc20d3 00:04:08.200 [2024-07-14 21:03:19.505112] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d0837a266, Actual=88010a2d4837a266 00:04:08.200 [2024-07-14 21:03:19.505152] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.505193] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 
843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.505234] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 [2024-07-14 21:03:19.505274] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 passed 00:04:08.200 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_guard_test ...[2024-07-14 21:03:19.505315] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=57ca2ffeed1e18c8 00:04:08.200 [2024-07-14 21:03:19.505345] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=117237b67dbdf293 00:04:08.200 [2024-07-14 21:03:19.505379] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:04:08.200 [2024-07-14 21:03:19.505426] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:04:08.200 [2024-07-14 21:03:19.505468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.505508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.505549] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 [2024-07-14 21:03:19.505589] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 [2024-07-14 21:03:19.505630] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9fbf 00:04:08.200 [2024-07-14 21:03:19.505661] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2d4e 00:04:08.200 [2024-07-14 21:03:19.505692] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:04:08.200 [2024-07-14 21:03:19.505732] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:04:08.200 [2024-07-14 21:03:19.505773] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.505814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.505854] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 [2024-07-14 21:03:19.505895] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 [2024-07-14 21:03:19.505936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: 
LBA=88, Expected=1ab753ed, Actual=6445bb6a 00:04:08.200 [2024-07-14 21:03:19.505966] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=5a35dccd 00:04:08.200 [2024-07-14 21:03:19.505998] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772cecc20d3, Actual=a576a7728ecc20d3 00:04:08.200 [2024-07-14 21:03:19.506038] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d0837a266, Actual=88010a2d4837a266 00:04:08.200 [2024-07-14 21:03:19.506079] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.506120] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.506160] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 [2024-07-14 21:03:19.506201] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 [2024-07-14 21:03:19.506241] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=57ca2ffeed1e18c8 00:04:08.200 passed 00:04:08.200 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_pi_16_test ...[2024-07-14 21:03:19.506272] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=117237b67dbdf293 00:04:08.200 [2024-07-14 21:03:19.506306] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:04:08.200 [2024-07-14 21:03:19.506347] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:04:08.200 [2024-07-14 21:03:19.506388] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.506428] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.200 [2024-07-14 21:03:19.506468] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 passed 00:04:08.200 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_apptag_test ...[2024-07-14 21:03:19.506509] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.200 [2024-07-14 21:03:19.506559] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9fbf 00:04:08.200 [2024-07-14 21:03:19.506590] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2d4e 00:04:08.200 [2024-07-14 21:03:19.506629] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:04:08.200 [2024-07-14 21:03:19.506671] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:04:08.201 [2024-07-14 21:03:19.506712] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.506752] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.506793] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.201 [2024-07-14 21:03:19.506834] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.201 [2024-07-14 21:03:19.506874] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6445bb6a 00:04:08.201 [2024-07-14 21:03:19.506905] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=5a35dccd 00:04:08.201 [2024-07-14 21:03:19.506936] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772cecc20d3, Actual=a576a7728ecc20d3 00:04:08.201 [2024-07-14 21:03:19.506976] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d0837a266, Actual=88010a2d4837a266 00:04:08.201 [2024-07-14 21:03:19.507017] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.507057] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.507098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.201 [2024-07-14 21:03:19.507138] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.201 [2024-07-14 21:03:19.507179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=57ca2ffeed1e18c8 00:04:08.201 passed 00:04:08.201 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_pi_16_test ...[2024-07-14 21:03:19.507210] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=117237b67dbdf293 00:04:08.201 [2024-07-14 21:03:19.507243] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:04:08.201 [2024-07-14 21:03:19.507284] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=be21, Actual=fe21 00:04:08.201 [2024-07-14 21:03:19.507324] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.507365] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.507405] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.201 passed 00:04:08.201 Test: dif_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_reftag_test ...[2024-07-14 21:03:19.507446] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.201 [2024-07-14 21:03:19.507486] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9fbf 00:04:08.201 [2024-07-14 21:03:19.507517] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fe21, Actual=2d4e 00:04:08.201 [2024-07-14 21:03:19.507551] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:04:08.201 [2024-07-14 21:03:19.507591] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=78574660, Actual=38574660 00:04:08.201 [2024-07-14 21:03:19.507632] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.507672] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.507713] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.201 [2024-07-14 21:03:19.507753] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.201 [2024-07-14 21:03:19.507794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6445bb6a 00:04:08.201 [2024-07-14 21:03:19.507825] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38574660, Actual=5a35dccd 00:04:08.201 [2024-07-14 21:03:19.507856] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772cecc20d3, Actual=a576a7728ecc20d3 00:04:08.201 [2024-07-14 21:03:19.507896] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d0837a266, Actual=88010a2d4837a266 00:04:08.201 [2024-07-14 21:03:19.507937] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.507977] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.508018] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.201 [2024-07-14 21:03:19.508058] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.201 [2024-07-14 21:03:19.508099] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=57ca2ffeed1e18c8 00:04:08.201 passed 00:04:08.201 Test: 
dif_copy_sec_512_md_8_prchk_0_single_iov ...passed 00:04:08.201 Test: dif_copy_sec_4096_md_128_prchk_0_single_iov_test ...[2024-07-14 21:03:19.508129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=88010a2d4837a266, Actual=117237b67dbdf293 00:04:08.201 passed 00:04:08.201 Test: dif_copy_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:04:08.201 Test: dif_copy_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:04:08.201 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:04:08.201 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:04:08.201 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:04:08.201 Test: dif_copy_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:04:08.201 Test: dif_copy_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:04:08.201 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-14 21:03:19.513735] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=bd4c, Actual=fd4c 00:04:08.201 [2024-07-14 21:03:19.513915] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=195c, Actual=595c 00:04:08.201 [2024-07-14 21:03:19.514090] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.514265] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.514437] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.201 [2024-07-14 21:03:19.514610] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.201 [2024-07-14 21:03:19.514781] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=9fbf 00:04:08.201 [2024-07-14 21:03:19.514956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=a62 00:04:08.201 [2024-07-14 21:03:19.515129] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=5ab753ed, Actual=1ab753ed 00:04:08.201 [2024-07-14 21:03:19.515302] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=da6c3744, Actual=9a6c3744 00:04:08.201 [2024-07-14 21:03:19.515481] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.515654] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.201 [2024-07-14 21:03:19.515827] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.201 [2024-07-14 21:03:19.515999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.201 [2024-07-14 21:03:19.516179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: 
*ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=6445bb6a 00:04:08.201 [2024-07-14 21:03:19.516376] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=6af2fb83 00:04:08.201 [2024-07-14 21:03:19.516608] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a772cecc20d3, Actual=a576a7728ecc20d3 00:04:08.201 [2024-07-14 21:03:19.516813] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=ffd0089209689bec, Actual=ffd0089249689bec 00:04:08.202 [2024-07-14 21:03:19.517016] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.517202] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.517380] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.202 [2024-07-14 21:03:19.517554] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.202 [2024-07-14 21:03:19.517728] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=57ca2ffeed1e18c8 00:04:08.202 [2024-07-14 21:03:19.517901] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=418883605c6c6bcd 00:04:08.202 passed 00:04:08.202 Test: dif_copy_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-14 21:03:19.517956] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:04:08.202 [2024-07-14 21:03:19.517999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed47, Actual=ad47 00:04:08.202 [2024-07-14 21:03:19.518042] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.518087] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.518139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.202 [2024-07-14 21:03:19.518183] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.202 [2024-07-14 21:03:19.518225] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9fbf 00:04:08.202 [2024-07-14 21:03:19.518267] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=fe79 00:04:08.202 [2024-07-14 21:03:19.518311] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:04:08.202 [2024-07-14 21:03:19.518354] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed 
to compare Guard: LBA=88, Expected=385a16c6, Actual=785a16c6 00:04:08.202 [2024-07-14 21:03:19.518396] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.518438] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.518480] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.202 [2024-07-14 21:03:19.518523] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.202 [2024-07-14 21:03:19.518565] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6445bb6a 00:04:08.202 [2024-07-14 21:03:19.518607] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=88c4da01 00:04:08.202 [2024-07-14 21:03:19.518650] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772cecc20d3, Actual=a576a7728ecc20d3 00:04:08.202 [2024-07-14 21:03:19.518693] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1f4d9c7236549076, Actual=1f4d9c7276549076 00:04:08.202 [2024-07-14 21:03:19.518736] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.518778] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.518821] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.202 [2024-07-14 21:03:19.518863] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.202 passed 00:04:08.202 Test: dix_sec_512_md_0_error ...passed 00:04:08.202 Test: dix_sec_512_md_8_prchk_0_single_iov ...[2024-07-14 21:03:19.518906] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=57ca2ffeed1e18c8 00:04:08.202 [2024-07-14 21:03:19.518949] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=a115178063506057 00:04:08.202 [2024-07-14 21:03:19.518959] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 510:spdk_dif_ctx_init: *ERROR*: Metadata size is smaller than DIF size. 
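(Aside: dix_sec_512_md_0_error above, like the dif_sec_*_md_*_error tests earlier, hands spdk_dif_ctx_init an undersized or zero metadata region and expects initialization to fail. The two error strings logged by these tests imply a validation roughly like the following — a re-derivation from the messages, not SPDK's actual source; the 8-byte constant is the PI trailer size for the 16-bit guard format:

#include <errno.h>
#include <stdint.h>

#define DIF_SIZE 8u /* bytes of protection information per block */

static int dif_ctx_check(uint32_t block_size, uint32_t md_size)
{
	uint32_t data_size;

	if (md_size < DIF_SIZE) {
		return -EINVAL; /* "Metadata size is smaller than DIF size." */
	}
	data_size = block_size - md_size;
	if (data_size == 0) {
		return -EINVAL; /* "Zero block size is not allowed..." */
	}
	/* The same message also demands 4 kB alignment in the 4100-byte,
	 * 128-byte-metadata test case; that extra rule is omitted here. */
	return 0;
}
)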
00:04:08.202 passed 00:04:08.202 Test: dix_sec_4096_md_128_prchk_0_single_iov_test ...passed 00:04:08.202 Test: dix_sec_512_md_8_prchk_0_1_2_4_multi_iovs ...passed 00:04:08.202 Test: dix_sec_4096_md_128_prchk_0_1_2_4_multi_iovs_test ...passed 00:04:08.202 Test: dix_sec_4096_md_128_prchk_7_multi_iovs ...passed 00:04:08.202 Test: dix_sec_512_md_8_prchk_7_multi_iovs_split_data ...passed 00:04:08.202 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_split_data_test ...passed 00:04:08.202 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits ...passed 00:04:08.202 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_test ...passed 00:04:08.202 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_test ...[2024-07-14 21:03:19.524619] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=bd4c, Actual=fd4c 00:04:08.202 [2024-07-14 21:03:19.524799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=195c, Actual=595c 00:04:08.202 [2024-07-14 21:03:19.524967] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.525139] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.525322] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.202 [2024-07-14 21:03:19.525508] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.202 [2024-07-14 21:03:19.525696] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=fd4c, Actual=9fbf 00:04:08.202 [2024-07-14 21:03:19.525869] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d90d, Actual=a62 00:04:08.202 [2024-07-14 21:03:19.526040] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=5ab753ed, Actual=1ab753ed 00:04:08.202 [2024-07-14 21:03:19.526215] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=da6c3744, Actual=9a6c3744 00:04:08.202 [2024-07-14 21:03:19.526391] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.526581] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.526780] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.202 [2024-07-14 21:03:19.526978] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.202 [2024-07-14 21:03:19.527179] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=1ab753ed, Actual=6445bb6a 00:04:08.202 [2024-07-14 21:03:19.527383] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=890612e, Actual=6af2fb83 00:04:08.202 
[2024-07-14 21:03:19.527586] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a772cecc20d3, Actual=a576a7728ecc20d3 00:04:08.202 [2024-07-14 21:03:19.527799] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=ffd0089209689bec, Actual=ffd0089249689bec 00:04:08.202 [2024-07-14 21:03:19.527999] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.528199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=90, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.528400] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.202 [2024-07-14 21:03:19.528614] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=90, Expected=5a, Actual=4000005a 00:04:08.202 [2024-07-14 21:03:19.528814] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=a576a7728ecc20d3, Actual=57ca2ffeed1e18c8 00:04:08.202 [2024-07-14 21:03:19.529010] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=90, Expected=d8fbbefb69e63b38, Actual=418883605c6c6bcd 00:04:08.202 passed 00:04:08.202 Test: dix_sec_4096_md_128_inject_1_2_4_8_multi_iovs_split_test ...[2024-07-14 21:03:19.529081] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=bd4c, Actual=fd4c 00:04:08.202 [2024-07-14 21:03:19.529143] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=ed47, Actual=ad47 00:04:08.202 [2024-07-14 21:03:19.529199] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.529244] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.202 [2024-07-14 21:03:19.529286] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.203 [2024-07-14 21:03:19.529329] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.203 [2024-07-14 21:03:19.529372] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=fd4c, Actual=9fbf 00:04:08.203 [2024-07-14 21:03:19.529415] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=2d16, Actual=fe79 00:04:08.203 [2024-07-14 21:03:19.529458] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=5ab753ed, Actual=1ab753ed 00:04:08.203 [2024-07-14 21:03:19.529500] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=385a16c6, Actual=785a16c6 00:04:08.203 [2024-07-14 21:03:19.529542] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.203 [2024-07-14 21:03:19.529584] 
/home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.203 [2024-07-14 21:03:19.529626] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.203 [2024-07-14 21:03:19.529668] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.203 [2024-07-14 21:03:19.529709] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1ab753ed, Actual=6445bb6a 00:04:08.203 [2024-07-14 21:03:19.529751] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=eaa640ac, Actual=88c4da01 00:04:08.203 [2024-07-14 21:03:19.529794] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a772cecc20d3, Actual=a576a7728ecc20d3 00:04:08.203 [2024-07-14 21:03:19.529836] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=1f4d9c7236549076, Actual=1f4d9c7276549076 00:04:08.203 [2024-07-14 21:03:19.529879] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.203 [2024-07-14 21:03:19.529921] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=88, Expected=88, Actual=4088 00:04:08.203 [2024-07-14 21:03:19.529970] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.203 [2024-07-14 21:03:19.530013] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=88, Expected=58, Actual=40000058 00:04:08.203 passed 00:04:08.203 Test: set_md_interleave_iovs_test ...[2024-07-14 21:03:19.530055] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=a576a7728ecc20d3, Actual=57ca2ffeed1e18c8 00:04:08.203 [2024-07-14 21:03:19.530098] /home/vagrant/spdk_repo/spdk/lib/util/dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=88, Expected=38662a1b56da30a2, Actual=a115178063506057 00:04:08.203 passed 00:04:08.203 Test: set_md_interleave_iovs_split_test ...passed 00:04:08.203 Test: dif_generate_stream_pi_16_test ...passed 00:04:08.203 Test: dif_generate_stream_test ...passed 00:04:08.203 Test: set_md_interleave_iovs_alignment_test ...[2024-07-14 21:03:19.530953] /home/vagrant/spdk_repo/spdk/lib/util/dif.c:1822:spdk_dif_set_md_interleave_iovs: *ERROR*: Buffer overflow will occur. 
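(Aside: set_md_interleave_iovs_alignment_test likewise provokes the "Buffer overflow will occur" error on purpose — it passes spdk_dif_set_md_interleave_iovs() an iovec array whose total capacity cannot hold the data plus the interleaved metadata. The capacity condition is easy to restate with plain POSIX iovecs; this is an illustration only, as the real entry point takes a spdk_dif_ctx and several more parameters:

#include <stddef.h>
#include <stdint.h>
#include <sys/uio.h>

/* Total bytes an iovec array can address. */
static size_t iovs_capacity(const struct iovec *iovs, int iovcnt)
{
	size_t total = 0;

	for (int i = 0; i < iovcnt; i++) {
		total += iovs[i].iov_len;
	}
	return total;
}

/* The overflow test behind the message above: with interleaved metadata,
 * num_blocks blocks each occupy block_size = data + md bytes. */
static int would_overflow(const struct iovec *iovs, int iovcnt,
			  uint32_t block_size, uint32_t num_blocks)
{
	return iovs_capacity(iovs, iovcnt) < (size_t)block_size * num_blocks;
}
)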
00:04:08.203 passed 00:04:08.203 Test: dif_generate_split_test ...passed 00:04:08.203 Test: set_md_interleave_iovs_multi_segments_test ...passed 00:04:08.203 Test: dif_verify_split_test ...passed 00:04:08.203 Test: dif_verify_stream_multi_segments_test ...passed 00:04:08.203 Test: update_crc32c_pi_16_test ...passed 00:04:08.203 Test: update_crc32c_test ...passed 00:04:08.203 Test: dif_update_crc32c_split_test ...passed 00:04:08.203 Test: dif_update_crc32c_stream_multi_segments_test ...passed 00:04:08.203 Test: get_range_with_md_test ...passed 00:04:08.203 Test: dif_sec_512_md_8_prchk_7_multi_iovs_remap_pi_16_test ...passed 00:04:08.203 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_remap_test ...passed 00:04:08.203 Test: dif_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:04:08.203 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_remap ...passed 00:04:08.203 Test: dix_sec_512_md_8_prchk_7_multi_iovs_complex_splits_remap_pi_16_test ...passed 00:04:08.203 Test: dix_sec_4096_md_128_prchk_7_multi_iovs_complex_splits_remap_test ...passed 00:04:08.203 Test: dif_generate_and_verify_unmap_test ...passed 00:04:08.203 00:04:08.203 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.203 suites 1 1 n/a 0 0 00:04:08.203 tests 79 79 79 0 0 00:04:08.203 asserts 3584 3584 3584 0 n/a 00:04:08.203 00:04:08.203 Elapsed time = 0.039 seconds 00:04:08.203 21:03:19 unittest.unittest_util -- unit/unittest.sh@143 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/iov.c/iov_ut 00:04:08.203 00:04:08.203 00:04:08.203 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.203 http://cunit.sourceforge.net/ 00:04:08.203 00:04:08.203 00:04:08.203 Suite: iov 00:04:08.203 Test: test_single_iov ...passed 00:04:08.203 Test: test_simple_iov ...passed 00:04:08.203 Test: test_complex_iov ...passed 00:04:08.203 Test: test_iovs_to_buf ...passed 00:04:08.203 Test: test_buf_to_iovs ...passed 00:04:08.203 Test: test_memset ...passed 00:04:08.203 Test: test_iov_one ...passed 00:04:08.203 Test: test_iov_xfer ...passed 00:04:08.203 00:04:08.203 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.203 suites 1 1 n/a 0 0 00:04:08.203 tests 8 8 8 0 0 00:04:08.203 asserts 156 156 156 0 n/a 00:04:08.203 00:04:08.203 Elapsed time = 0.000 seconds 00:04:08.203 21:03:19 unittest.unittest_util -- unit/unittest.sh@144 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/math.c/math_ut 00:04:08.203 00:04:08.203 00:04:08.203 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.203 http://cunit.sourceforge.net/ 00:04:08.203 00:04:08.203 00:04:08.203 Suite: math 00:04:08.203 Test: test_serial_number_arithmetic ...passed 00:04:08.203 Suite: erase 00:04:08.203 Test: test_memset_s ...passed 00:04:08.203 00:04:08.203 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.203 suites 2 2 n/a 0 0 00:04:08.203 tests 2 2 2 0 0 00:04:08.203 asserts 18 18 18 0 n/a 00:04:08.203 00:04:08.203 Elapsed time = 0.000 seconds 00:04:08.203 21:03:19 unittest.unittest_util -- unit/unittest.sh@145 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/pipe.c/pipe_ut 00:04:08.203 00:04:08.203 00:04:08.203 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.203 http://cunit.sourceforge.net/ 00:04:08.204 00:04:08.204 00:04:08.204 Suite: pipe 00:04:08.204 Test: test_create_destroy ...passed 00:04:08.204 Test: test_write_get_buffer ...passed 00:04:08.204 Test: test_write_advance ...passed 00:04:08.204 Test: test_read_get_buffer ...passed 00:04:08.204 Test: test_read_advance ...passed 00:04:08.204 Test: 
test_data ...passed 00:04:08.204 00:04:08.204 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.204 suites 1 1 n/a 0 0 00:04:08.204 tests 6 6 6 0 0 00:04:08.204 asserts 251 251 251 0 n/a 00:04:08.204 00:04:08.204 Elapsed time = 0.000 seconds 00:04:08.204 21:03:19 unittest.unittest_util -- unit/unittest.sh@146 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/util/xor.c/xor_ut 00:04:08.204 00:04:08.204 00:04:08.204 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.204 http://cunit.sourceforge.net/ 00:04:08.204 00:04:08.204 00:04:08.204 Suite: xor 00:04:08.204 Test: test_xor_gen ...passed 00:04:08.204 00:04:08.204 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.204 suites 1 1 n/a 0 0 00:04:08.204 tests 1 1 1 0 0 00:04:08.204 asserts 17 17 17 0 n/a 00:04:08.204 00:04:08.204 Elapsed time = 0.000 seconds 00:04:08.204 00:04:08.204 real 0m0.115s 00:04:08.204 user 0m0.086s 00:04:08.204 sys 0m0.055s 00:04:08.204 21:03:19 unittest.unittest_util -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.204 ************************************ 00:04:08.204 END TEST unittest_util 00:04:08.204 ************************************ 00:04:08.204 21:03:19 unittest.unittest_util -- common/autotest_common.sh@10 -- # set +x 00:04:08.204 21:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:08.204 21:03:19 unittest -- unit/unittest.sh@284 -- # grep -q '#define SPDK_CONFIG_VHOST 1' /home/vagrant/spdk_repo/spdk/include/spdk/config.h 00:04:08.204 21:03:19 unittest -- unit/unittest.sh@287 -- # run_test unittest_dma /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:04:08.204 21:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.204 21:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.204 21:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:08.204 ************************************ 00:04:08.204 START TEST unittest_dma 00:04:08.204 ************************************ 00:04:08.204 21:03:19 unittest.unittest_dma -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/dma/dma.c/dma_ut 00:04:08.204 00:04:08.204 00:04:08.204 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.204 http://cunit.sourceforge.net/ 00:04:08.204 00:04:08.204 00:04:08.204 Suite: dma_suite 00:04:08.204 Test: test_dma ...[2024-07-14 21:03:19.604564] /home/vagrant/spdk_repo/spdk/lib/dma/dma.c: 56:spdk_memory_domain_create: *ERROR*: Context size can't be 0 00:04:08.204 passed 00:04:08.204 00:04:08.204 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.204 suites 1 1 n/a 0 0 00:04:08.204 tests 1 1 1 0 0 00:04:08.204 asserts 54 54 54 0 n/a 00:04:08.204 00:04:08.204 Elapsed time = 0.000 seconds 00:04:08.204 00:04:08.204 real 0m0.005s 00:04:08.204 user 0m0.000s 00:04:08.204 sys 0m0.004s 00:04:08.204 21:03:19 unittest.unittest_dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.204 21:03:19 unittest.unittest_dma -- common/autotest_common.sh@10 -- # set +x 00:04:08.204 ************************************ 00:04:08.204 END TEST unittest_dma 00:04:08.204 ************************************ 00:04:08.204 21:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:08.204 21:03:19 unittest -- unit/unittest.sh@289 -- # run_test unittest_init unittest_init 00:04:08.204 21:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.204 21:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.204 21:03:19 
unittest -- common/autotest_common.sh@10 -- # set +x 00:04:08.204 ************************************ 00:04:08.204 START TEST unittest_init 00:04:08.204 ************************************ 00:04:08.204 21:03:19 unittest.unittest_init -- common/autotest_common.sh@1123 -- # unittest_init 00:04:08.204 21:03:19 unittest.unittest_init -- unit/unittest.sh@150 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/init/subsystem.c/subsystem_ut 00:04:08.204 00:04:08.204 00:04:08.204 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.204 http://cunit.sourceforge.net/ 00:04:08.204 00:04:08.204 00:04:08.204 Suite: subsystem_suite 00:04:08.204 Test: subsystem_sort_test_depends_on_single ...passed 00:04:08.204 Test: subsystem_sort_test_depends_on_multiple ...passed 00:04:08.204 Test: subsystem_sort_test_missing_dependency ...passed 00:04:08.204 00:04:08.204 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.204 suites 1 1 n/a 0 0 00:04:08.204 tests 3 3 3 0 0 00:04:08.204 asserts 20 20 20 0 n/a 00:04:08.204 00:04:08.204 Elapsed time = 0.000 seconds 00:04:08.204 [2024-07-14 21:03:19.645743] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 197:spdk_subsystem_init: *ERROR*: subsystem A dependency B is missing 00:04:08.204 [2024-07-14 21:03:19.645927] /home/vagrant/spdk_repo/spdk/lib/init/subsystem.c: 191:spdk_subsystem_init: *ERROR*: subsystem C is missing 00:04:08.204 00:04:08.204 real 0m0.005s 00:04:08.204 user 0m0.000s 00:04:08.204 sys 0m0.008s 00:04:08.204 21:03:19 unittest.unittest_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.204 21:03:19 unittest.unittest_init -- common/autotest_common.sh@10 -- # set +x 00:04:08.204 ************************************ 00:04:08.204 END TEST unittest_init 00:04:08.204 ************************************ 00:04:08.204 21:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:08.204 21:03:19 unittest -- unit/unittest.sh@290 -- # run_test unittest_keyring /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:04:08.204 21:03:19 unittest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.204 21:03:19 unittest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.204 21:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:08.204 ************************************ 00:04:08.204 START TEST unittest_keyring 00:04:08.204 ************************************ 00:04:08.204 21:03:19 unittest.unittest_keyring -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/unit/lib/keyring/keyring.c/keyring_ut 00:04:08.204 00:04:08.204 00:04:08.204 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.204 http://cunit.sourceforge.net/ 00:04:08.204 00:04:08.204 00:04:08.204 Suite: keyring 00:04:08.204 Test: test_keyring_add_remove ...passed 00:04:08.204 Test: test_keyring_get_put ...passed 00:04:08.204 00:04:08.204 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.204 suites 1 1 n/a 0 0 00:04:08.204 tests 2 2 2 0 0 00:04:08.204 asserts 44 44 44 0 n/a 00:04:08.204 00:04:08.204 Elapsed time = 0.000 seconds 00:04:08.204 [2024-07-14 21:03:19.690090] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key 'key0' already exists 00:04:08.204 [2024-07-14 21:03:19.690256] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 107:spdk_keyring_add_key: *ERROR*: Key ':key0' already exists 00:04:08.205 [2024-07-14 21:03:19.690283] /home/vagrant/spdk_repo/spdk/lib/keyring/keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to 
add key 'key0' to the keyring 00:04:08.205 00:04:08.205 real 0m0.004s 00:04:08.205 user 0m0.000s 00:04:08.205 sys 0m0.003s 00:04:08.205 21:03:19 unittest.unittest_keyring -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.205 21:03:19 unittest.unittest_keyring -- common/autotest_common.sh@10 -- # set +x 00:04:08.205 ************************************ 00:04:08.205 END TEST unittest_keyring 00:04:08.205 ************************************ 00:04:08.205 21:03:19 unittest -- common/autotest_common.sh@1142 -- # return 0 00:04:08.205 21:03:19 unittest -- unit/unittest.sh@292 -- # '[' no = yes ']' 00:04:08.205 21:03:19 unittest -- unit/unittest.sh@305 -- # set +x 00:04:08.205 00:04:08.205 00:04:08.205 ===================== 00:04:08.205 All unit tests passed 00:04:08.205 ===================== 00:04:08.205 WARN: lcov not installed or SPDK built without coverage! 00:04:08.205 WARN: neither valgrind nor ASAN is enabled! 00:04:08.205 00:04:08.205 00:04:08.205 00:04:08.205 real 0m14.349s 00:04:08.205 user 0m11.532s 00:04:08.205 sys 0m1.450s 00:04:08.205 21:03:19 unittest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.205 21:03:19 unittest -- common/autotest_common.sh@10 -- # set +x 00:04:08.205 ************************************ 00:04:08.205 END TEST unittest 00:04:08.205 ************************************ 00:04:08.464 21:03:19 -- common/autotest_common.sh@1142 -- # return 0 00:04:08.464 21:03:19 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:08.464 21:03:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:08.464 21:03:19 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:08.464 21:03:19 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:08.464 21:03:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:08.464 21:03:19 -- common/autotest_common.sh@10 -- # set +x 00:04:08.464 21:03:19 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:08.464 21:03:19 -- spdk/autotest.sh@168 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:08.464 21:03:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.464 21:03:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.464 21:03:19 -- common/autotest_common.sh@10 -- # set +x 00:04:08.464 ************************************ 00:04:08.464 START TEST env 00:04:08.464 ************************************ 00:04:08.464 21:03:19 env -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:08.464 * Looking for test storage... 
00:04:08.464 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:08.464 21:03:19 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:08.464 21:03:19 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.464 21:03:19 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.464 21:03:19 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.464 ************************************ 00:04:08.464 START TEST env_memory 00:04:08.464 ************************************ 00:04:08.464 21:03:19 env.env_memory -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:08.464 00:04:08.464 00:04:08.464 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.464 http://cunit.sourceforge.net/ 00:04:08.464 00:04:08.464 00:04:08.464 Suite: memory 00:04:08.464 Test: alloc and free memory map ...[2024-07-14 21:03:19.945350] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:08.464 passed 00:04:08.464 Test: mem map translation ...[2024-07-14 21:03:19.956695] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:08.464 [2024-07-14 21:03:19.956767] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 591:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:08.464 [2024-07-14 21:03:19.956802] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:08.464 [2024-07-14 21:03:19.956820] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:08.464 passed 00:04:08.464 Test: mem map registration ...[2024-07-14 21:03:19.970066] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:08.464 [2024-07-14 21:03:19.970126] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:08.464 passed 00:04:08.464 Test: mem map adjacent registrations ...passed 00:04:08.464 00:04:08.464 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.464 suites 1 1 n/a 0 0 00:04:08.464 tests 4 4 4 0 0 00:04:08.464 asserts 152 152 152 0 n/a 00:04:08.464 00:04:08.464 Elapsed time = 0.055 seconds 00:04:08.464 00:04:08.464 real 0m0.060s 00:04:08.464 user 0m0.060s 00:04:08.464 sys 0m0.004s 00:04:08.464 21:03:19 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:08.464 21:03:19 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:08.464 ************************************ 00:04:08.464 END TEST env_memory 00:04:08.464 ************************************ 00:04:08.774 21:03:20 env -- common/autotest_common.sh@1142 -- # return 0 00:04:08.774 21:03:20 env -- env/env.sh@11 -- # run_test env_vtophys /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:08.774 21:03:20 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:08.774 21:03:20 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:08.774 21:03:20 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.774 ************************************ 00:04:08.774 START TEST env_vtophys 
00:04:08.774 ************************************ 00:04:08.774 21:03:20 env.env_vtophys -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:08.774 EAL: lib.eal log level changed from notice to debug 00:04:08.774 EAL: Sysctl reports 10 cpus 00:04:08.774 EAL: Detected lcore 0 as core 0 on socket 0 00:04:08.774 EAL: Detected lcore 1 as core 0 on socket 0 00:04:08.774 EAL: Detected lcore 2 as core 0 on socket 0 00:04:08.774 EAL: Detected lcore 3 as core 0 on socket 0 00:04:08.774 EAL: Detected lcore 4 as core 0 on socket 0 00:04:08.774 EAL: Detected lcore 5 as core 0 on socket 0 00:04:08.774 EAL: Detected lcore 6 as core 0 on socket 0 00:04:08.774 EAL: Detected lcore 7 as core 0 on socket 0 00:04:08.774 EAL: Detected lcore 8 as core 0 on socket 0 00:04:08.774 EAL: Detected lcore 9 as core 0 on socket 0 00:04:08.774 EAL: Maximum logical cores by configuration: 128 00:04:08.774 EAL: Detected CPU lcores: 10 00:04:08.774 EAL: Detected NUMA nodes: 1 00:04:08.774 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:08.774 EAL: Checking presence of .so 'librte_eal.so.24' 00:04:08.774 EAL: Checking presence of .so 'librte_eal.so' 00:04:08.774 EAL: Detected static linkage of DPDK 00:04:08.774 EAL: No shared files mode enabled, IPC will be disabled 00:04:08.774 EAL: PCI scan found 10 devices 00:04:08.774 EAL: Specific IOVA mode is not requested, autodetecting 00:04:08.774 EAL: Selecting IOVA mode according to bus requests 00:04:08.774 EAL: Bus pci wants IOVA as 'PA' 00:04:08.774 EAL: Selected IOVA mode 'PA' 00:04:08.774 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:04:08.774 EAL: Ask a virtual area of 0x2e000 bytes 00:04:08.774 EAL: WARNING! Base virtual address hint (0x1000005000 != 0x1000805000) not respected! 00:04:08.774 EAL: This may cause issues with mapping memory into secondary processes 00:04:08.774 EAL: Virtual area found at 0x1000805000 (size = 0x2e000) 00:04:08.774 EAL: Setting up physically contiguous memory... 00:04:08.774 EAL: Ask a virtual area of 0x1000 bytes 00:04:08.774 EAL: WARNING! Base virtual address hint (0x100000b000 != 0x1000aee000) not respected! 00:04:08.774 EAL: This may cause issues with mapping memory into secondary processes 00:04:08.774 EAL: Virtual area found at 0x1000aee000 (size = 0x1000) 00:04:08.774 EAL: Memseg list allocated at socket 0, page size 0x40000kB 00:04:08.774 EAL: Ask a virtual area of 0xf0000000 bytes 00:04:08.774 EAL: WARNING! Base virtual address hint (0x105000c000 != 0x1060000000) not respected! 
00:04:08.774 EAL: This may cause issues with mapping memory into secondary processes 00:04:08.774 EAL: Virtual area found at 0x1060000000 (size = 0xf0000000) 00:04:08.774 EAL: VA reserved for memseg list at 0x1060000000, size f0000000 00:04:08.774 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x60000000, len 268435456 00:04:08.774 EAL: Mapped memory segment 1 @ 0x1080000000: physaddr:0x110000000, len 268435456 00:04:08.774 EAL: Mapped memory segment 2 @ 0x1070000000: physaddr:0x120000000, len 268435456 00:04:09.045 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x130000000, len 268435456 00:04:09.045 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x140000000, len 268435456 00:04:09.045 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x150000000, len 268435456 00:04:09.045 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x160000000, len 268435456 00:04:09.303 EAL: Mapped memory segment 7 @ 0x10e0000000: physaddr:0x180000000, len 268435456 00:04:09.303 EAL: No shared files mode enabled, IPC is disabled 00:04:09.303 EAL: Added 1792M to heap on socket 0 00:04:09.303 EAL: Added 256M to heap on socket 0 00:04:09.303 EAL: TSC is not safe to use in SMP mode 00:04:09.303 EAL: TSC is not invariant 00:04:09.303 EAL: TSC frequency is ~2199999 KHz 00:04:09.303 EAL: Main lcore 0 is ready (tid=1af87de12000;cpuset=[0]) 00:04:09.303 EAL: PCI scan found 10 devices 00:04:09.303 EAL: Registering mem event callbacks not supported 00:04:09.303 00:04:09.303 00:04:09.303 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.303 http://cunit.sourceforge.net/ 00:04:09.303 00:04:09.303 00:04:09.303 Suite: components_suite 00:04:09.303 Test: vtophys_malloc_test ...passed 00:04:09.560 Test: vtophys_spdk_malloc_test ...passed 00:04:09.560 00:04:09.560 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.560 suites 1 1 n/a 0 0 00:04:09.560 tests 2 2 2 0 0 00:04:09.560 asserts 497 497 497 0 n/a 00:04:09.560 00:04:09.560 Elapsed time = 0.367 seconds 00:04:09.560 00:04:09.560 real 0m0.975s 00:04:09.560 user 0m0.377s 00:04:09.560 sys 0m0.599s 00:04:09.560 21:03:21 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.560 21:03:21 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:09.560 ************************************ 00:04:09.560 END TEST env_vtophys 00:04:09.560 ************************************ 00:04:09.560 21:03:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:09.560 21:03:21 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:09.560 21:03:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:09.560 21:03:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.560 21:03:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.560 ************************************ 00:04:09.560 START TEST env_pci 00:04:09.560 ************************************ 00:04:09.560 21:03:21 env.env_pci -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:09.560 00:04:09.560 00:04:09.560 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.560 http://cunit.sourceforge.net/ 00:04:09.560 00:04:09.560 00:04:09.560 Suite: pci 00:04:09.560 Test: pci_hook ...passed 00:04:09.560 00:04:09.560 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.560 suites 1 1 n/a 0 0 00:04:09.560 tests 1 1 1 0 0 00:04:09.560 asserts 25 25 25 0 n/a 00:04:09.560 00:04:09.560 Elapsed time = 0.000 seconds 00:04:09.560 EAL: Cannot 
find device (10000:00:01.0) 00:04:09.560 EAL: Failed to attach device on primary process 00:04:09.560 00:04:09.560 real 0m0.007s 00:04:09.560 user 0m0.005s 00:04:09.560 sys 0m0.006s 00:04:09.560 21:03:21 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:09.560 ************************************ 00:04:09.560 END TEST env_pci 00:04:09.560 ************************************ 00:04:09.560 21:03:21 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:09.560 21:03:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:09.560 21:03:21 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:09.560 21:03:21 env -- env/env.sh@15 -- # uname 00:04:09.560 21:03:21 env -- env/env.sh@15 -- # '[' FreeBSD = Linux ']' 00:04:09.560 21:03:21 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:04:09.560 21:03:21 env -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:09.560 21:03:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:09.560 21:03:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.818 ************************************ 00:04:09.818 START TEST env_dpdk_post_init 00:04:09.818 ************************************ 00:04:09.818 21:03:21 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 00:04:09.818 EAL: Sysctl reports 10 cpus 00:04:09.818 EAL: Detected CPU lcores: 10 00:04:09.818 EAL: Detected NUMA nodes: 1 00:04:09.818 EAL: Detected static linkage of DPDK 00:04:09.818 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:09.818 EAL: Selected IOVA mode 'PA' 00:04:09.818 EAL: Contigmem driver has 8 buffers, each of size 256MB 00:04:09.818 EAL: Mapped memory segment 0 @ 0x1060000000: physaddr:0x60000000, len 268435456 00:04:09.818 EAL: Mapped memory segment 1 @ 0x1080000000: physaddr:0x110000000, len 268435456 00:04:09.818 EAL: Mapped memory segment 2 @ 0x1070000000: physaddr:0x120000000, len 268435456 00:04:10.078 EAL: Mapped memory segment 3 @ 0x1090000000: physaddr:0x130000000, len 268435456 00:04:10.078 EAL: Mapped memory segment 4 @ 0x10a0000000: physaddr:0x140000000, len 268435456 00:04:10.078 EAL: Mapped memory segment 5 @ 0x10b0000000: physaddr:0x150000000, len 268435456 00:04:10.078 EAL: Mapped memory segment 6 @ 0x10c0000000: physaddr:0x160000000, len 268435456 00:04:10.337 EAL: Mapped memory segment 7 @ 0x10e0000000: physaddr:0x180000000, len 268435456 00:04:10.337 EAL: TSC is not safe to use in SMP mode 00:04:10.337 EAL: TSC is not invariant 00:04:10.337 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:10.337 [2024-07-14 21:03:21.654695] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:04:10.337 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:10.337 Starting DPDK initialization... 00:04:10.337 Starting SPDK post initialization... 00:04:10.337 SPDK NVMe probe 00:04:10.337 Attaching to 0000:00:10.0 00:04:10.337 Attached to 0000:00:10.0 00:04:10.337 Cleaning up... 
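
The "Attaching to 0000:00:10.0" / "Attached to 0000:00:10.0" pair above comes from the spdk_nvme probe/attach flow that env_dpdk_post_init exercises. A minimal sketch of that flow, in the spirit of SPDK's hello_world example; the spdk_env_init() and spdk_nvme_probe() signatures are quoted from memory of spdk/env.h and spdk/nvme.h, so verify them against the headers of the tree under test:

```c
#include <stdbool.h>
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attaching to %s\n", trid->traddr);
	return true; /* true = attach to this controller */
}

static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	printf("Attached to %s\n", trid->traddr);
	/* A real application would allocate I/O queue pairs here. */
}

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "probe_sketch";
	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init failed\n");
		return 1;
	}

	/* NULL transport ID: enumerate NVMe controllers on the local PCIe bus. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		fprintf(stderr, "spdk_nvme_probe failed\n");
		return 1;
	}
	return 0;
}
```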
00:04:10.337 00:04:10.337 real 0m0.579s 00:04:10.337 user 0m0.014s 00:04:10.337 sys 0m0.559s 00:04:10.337 21:03:21 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.337 21:03:21 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:10.337 ************************************ 00:04:10.337 END TEST env_dpdk_post_init 00:04:10.337 ************************************ 00:04:10.337 21:03:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:10.337 21:03:21 env -- env/env.sh@26 -- # uname 00:04:10.337 21:03:21 env -- env/env.sh@26 -- # '[' FreeBSD = Linux ']' 00:04:10.337 00:04:10.337 real 0m1.955s 00:04:10.337 user 0m0.654s 00:04:10.337 sys 0m1.311s 00:04:10.337 21:03:21 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:10.337 ************************************ 00:04:10.337 END TEST env 00:04:10.337 21:03:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:10.337 ************************************ 00:04:10.337 21:03:21 -- common/autotest_common.sh@1142 -- # return 0 00:04:10.337 21:03:21 -- spdk/autotest.sh@169 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:10.337 21:03:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:10.337 21:03:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:10.337 21:03:21 -- common/autotest_common.sh@10 -- # set +x 00:04:10.337 ************************************ 00:04:10.337 START TEST rpc 00:04:10.337 ************************************ 00:04:10.337 21:03:21 rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:10.595 * Looking for test storage... 00:04:10.595 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:10.595 21:03:21 rpc -- rpc/rpc.sh@65 -- # spdk_pid=45490 00:04:10.595 21:03:21 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:10.595 21:03:21 rpc -- rpc/rpc.sh@67 -- # waitforlisten 45490 00:04:10.595 21:03:21 rpc -- common/autotest_common.sh@829 -- # '[' -z 45490 ']' 00:04:10.595 21:03:21 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:10.595 21:03:21 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:10.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:10.595 21:03:21 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:10.595 21:03:21 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:10.595 21:03:21 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:10.595 21:03:21 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:10.595 [2024-07-14 21:03:21.938571] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:10.595 [2024-07-14 21:03:21.938735] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:11.161 EAL: TSC is not safe to use in SMP mode 00:04:11.161 EAL: TSC is not invariant 00:04:11.161 [2024-07-14 21:03:22.486209] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:11.161 [2024-07-14 21:03:22.564369] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:11.161 [2024-07-14 21:03:22.566691] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:04:11.161 [2024-07-14 21:03:22.566716] app.c: 607:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 45490' to capture a snapshot of events at runtime. 00:04:11.161 [2024-07-14 21:03:22.566749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:11.727 21:03:23 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:11.727 21:03:23 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:11.727 21:03:23 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:11.727 21:03:23 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:11.727 21:03:23 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:11.727 21:03:23 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:11.727 21:03:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.727 21:03:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.727 21:03:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.727 ************************************ 00:04:11.727 START TEST rpc_integrity 00:04:11.727 ************************************ 00:04:11.727 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:11.727 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:11.727 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.727 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.727 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.727 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:11.727 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:11.727 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.727 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.727 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.727 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.727 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.727 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:11.727 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.727 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.727 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.727 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.727 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.727 { 00:04:11.727 "name": "Malloc0", 00:04:11.727 "aliases": [ 00:04:11.727 "817b9d7e-4224-11ef-aa83-81fbc7dfef58" 00:04:11.727 ], 00:04:11.727 "product_name": "Malloc disk", 00:04:11.727 "block_size": 512, 00:04:11.727 "num_blocks": 16384, 00:04:11.727 "uuid": "817b9d7e-4224-11ef-aa83-81fbc7dfef58", 00:04:11.727 "assigned_rate_limits": { 00:04:11.727 "rw_ios_per_sec": 0, 00:04:11.727 "rw_mbytes_per_sec": 0, 00:04:11.727 "r_mbytes_per_sec": 0, 00:04:11.727 "w_mbytes_per_sec": 0 00:04:11.727 }, 00:04:11.727 "claimed": false, 00:04:11.727 
"zoned": false, 00:04:11.728 "supported_io_types": { 00:04:11.728 "read": true, 00:04:11.728 "write": true, 00:04:11.728 "unmap": true, 00:04:11.728 "flush": true, 00:04:11.728 "reset": true, 00:04:11.728 "nvme_admin": false, 00:04:11.728 "nvme_io": false, 00:04:11.728 "nvme_io_md": false, 00:04:11.728 "write_zeroes": true, 00:04:11.728 "zcopy": true, 00:04:11.728 "get_zone_info": false, 00:04:11.728 "zone_management": false, 00:04:11.728 "zone_append": false, 00:04:11.728 "compare": false, 00:04:11.728 "compare_and_write": false, 00:04:11.728 "abort": true, 00:04:11.728 "seek_hole": false, 00:04:11.728 "seek_data": false, 00:04:11.728 "copy": true, 00:04:11.728 "nvme_iov_md": false 00:04:11.728 }, 00:04:11.728 "memory_domains": [ 00:04:11.728 { 00:04:11.728 "dma_device_id": "system", 00:04:11.728 "dma_device_type": 1 00:04:11.728 }, 00:04:11.728 { 00:04:11.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.728 "dma_device_type": 2 00:04:11.728 } 00:04:11.728 ], 00:04:11.728 "driver_specific": {} 00:04:11.728 } 00:04:11.728 ]' 00:04:11.728 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:11.728 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.728 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.728 [2024-07-14 21:03:23.075245] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:11.728 [2024-07-14 21:03:23.075302] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:11.728 [2024-07-14 21:03:23.075961] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3607bb437a00 00:04:11.728 [2024-07-14 21:03:23.076001] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.728 [2024-07-14 21:03:23.076875] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.728 [2024-07-14 21:03:23.076929] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:11.728 Passthru0 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.728 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.728 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:11.728 { 00:04:11.728 "name": "Malloc0", 00:04:11.728 "aliases": [ 00:04:11.728 "817b9d7e-4224-11ef-aa83-81fbc7dfef58" 00:04:11.728 ], 00:04:11.728 "product_name": "Malloc disk", 00:04:11.728 "block_size": 512, 00:04:11.728 "num_blocks": 16384, 00:04:11.728 "uuid": "817b9d7e-4224-11ef-aa83-81fbc7dfef58", 00:04:11.728 "assigned_rate_limits": { 00:04:11.728 "rw_ios_per_sec": 0, 00:04:11.728 "rw_mbytes_per_sec": 0, 00:04:11.728 "r_mbytes_per_sec": 0, 00:04:11.728 "w_mbytes_per_sec": 0 00:04:11.728 }, 00:04:11.728 "claimed": true, 00:04:11.728 "claim_type": "exclusive_write", 00:04:11.728 "zoned": false, 00:04:11.728 "supported_io_types": { 00:04:11.728 "read": true, 00:04:11.728 "write": true, 00:04:11.728 "unmap": true, 00:04:11.728 "flush": true, 00:04:11.728 "reset": true, 
00:04:11.728 "nvme_admin": false, 00:04:11.728 "nvme_io": false, 00:04:11.728 "nvme_io_md": false, 00:04:11.728 "write_zeroes": true, 00:04:11.728 "zcopy": true, 00:04:11.728 "get_zone_info": false, 00:04:11.728 "zone_management": false, 00:04:11.728 "zone_append": false, 00:04:11.728 "compare": false, 00:04:11.728 "compare_and_write": false, 00:04:11.728 "abort": true, 00:04:11.728 "seek_hole": false, 00:04:11.728 "seek_data": false, 00:04:11.728 "copy": true, 00:04:11.728 "nvme_iov_md": false 00:04:11.728 }, 00:04:11.728 "memory_domains": [ 00:04:11.728 { 00:04:11.728 "dma_device_id": "system", 00:04:11.728 "dma_device_type": 1 00:04:11.728 }, 00:04:11.728 { 00:04:11.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.728 "dma_device_type": 2 00:04:11.728 } 00:04:11.728 ], 00:04:11.728 "driver_specific": {} 00:04:11.728 }, 00:04:11.728 { 00:04:11.728 "name": "Passthru0", 00:04:11.728 "aliases": [ 00:04:11.728 "7d05cebc-2f2e-485e-b3f4-ff51903fe3a8" 00:04:11.728 ], 00:04:11.728 "product_name": "passthru", 00:04:11.728 "block_size": 512, 00:04:11.728 "num_blocks": 16384, 00:04:11.728 "uuid": "7d05cebc-2f2e-485e-b3f4-ff51903fe3a8", 00:04:11.728 "assigned_rate_limits": { 00:04:11.728 "rw_ios_per_sec": 0, 00:04:11.728 "rw_mbytes_per_sec": 0, 00:04:11.728 "r_mbytes_per_sec": 0, 00:04:11.728 "w_mbytes_per_sec": 0 00:04:11.728 }, 00:04:11.728 "claimed": false, 00:04:11.728 "zoned": false, 00:04:11.728 "supported_io_types": { 00:04:11.728 "read": true, 00:04:11.728 "write": true, 00:04:11.728 "unmap": true, 00:04:11.728 "flush": true, 00:04:11.728 "reset": true, 00:04:11.728 "nvme_admin": false, 00:04:11.728 "nvme_io": false, 00:04:11.728 "nvme_io_md": false, 00:04:11.728 "write_zeroes": true, 00:04:11.728 "zcopy": true, 00:04:11.728 "get_zone_info": false, 00:04:11.728 "zone_management": false, 00:04:11.728 "zone_append": false, 00:04:11.728 "compare": false, 00:04:11.728 "compare_and_write": false, 00:04:11.728 "abort": true, 00:04:11.728 "seek_hole": false, 00:04:11.728 "seek_data": false, 00:04:11.728 "copy": true, 00:04:11.728 "nvme_iov_md": false 00:04:11.728 }, 00:04:11.728 "memory_domains": [ 00:04:11.728 { 00:04:11.728 "dma_device_id": "system", 00:04:11.728 "dma_device_type": 1 00:04:11.728 }, 00:04:11.728 { 00:04:11.728 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.728 "dma_device_type": 2 00:04:11.728 } 00:04:11.728 ], 00:04:11.728 "driver_specific": { 00:04:11.728 "passthru": { 00:04:11.728 "name": "Passthru0", 00:04:11.728 "base_bdev_name": "Malloc0" 00:04:11.728 } 00:04:11.728 } 00:04:11.728 } 00:04:11.728 ]' 00:04:11.728 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:11.728 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:11.728 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.728 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.728 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:11.728 
21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.728 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:11.728 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:11.728 21:03:23 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:11.728 00:04:11.728 real 0m0.140s 00:04:11.728 user 0m0.057s 00:04:11.728 sys 0m0.017s 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.728 21:03:23 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.728 ************************************ 00:04:11.728 END TEST rpc_integrity 00:04:11.728 ************************************ 00:04:11.728 21:03:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.728 21:03:23 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:11.728 21:03:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.728 21:03:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.728 21:03:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.728 ************************************ 00:04:11.728 START TEST rpc_plugins 00:04:11.728 ************************************ 00:04:11.728 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:11.728 21:03:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:11.728 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.728 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.728 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.728 21:03:23 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:11.728 21:03:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:11.728 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.728 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.728 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.728 21:03:23 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:11.728 { 00:04:11.728 "name": "Malloc1", 00:04:11.728 "aliases": [ 00:04:11.728 "81953f68-4224-11ef-aa83-81fbc7dfef58" 00:04:11.728 ], 00:04:11.728 "product_name": "Malloc disk", 00:04:11.728 "block_size": 4096, 00:04:11.728 "num_blocks": 256, 00:04:11.728 "uuid": "81953f68-4224-11ef-aa83-81fbc7dfef58", 00:04:11.728 "assigned_rate_limits": { 00:04:11.728 "rw_ios_per_sec": 0, 00:04:11.728 "rw_mbytes_per_sec": 0, 00:04:11.728 "r_mbytes_per_sec": 0, 00:04:11.728 "w_mbytes_per_sec": 0 00:04:11.728 }, 00:04:11.728 "claimed": false, 00:04:11.728 "zoned": false, 00:04:11.728 "supported_io_types": { 00:04:11.728 "read": true, 00:04:11.728 "write": true, 00:04:11.728 "unmap": true, 00:04:11.728 "flush": true, 00:04:11.728 "reset": true, 00:04:11.728 "nvme_admin": false, 00:04:11.728 "nvme_io": false, 00:04:11.728 "nvme_io_md": false, 00:04:11.728 "write_zeroes": true, 00:04:11.728 "zcopy": true, 00:04:11.728 "get_zone_info": false, 00:04:11.728 "zone_management": false, 00:04:11.728 "zone_append": false, 00:04:11.728 "compare": false, 00:04:11.728 "compare_and_write": false, 00:04:11.728 "abort": true, 00:04:11.728 "seek_hole": false, 00:04:11.728 "seek_data": false, 00:04:11.729 "copy": 
true, 00:04:11.729 "nvme_iov_md": false 00:04:11.729 }, 00:04:11.729 "memory_domains": [ 00:04:11.729 { 00:04:11.729 "dma_device_id": "system", 00:04:11.729 "dma_device_type": 1 00:04:11.729 }, 00:04:11.729 { 00:04:11.729 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.729 "dma_device_type": 2 00:04:11.729 } 00:04:11.729 ], 00:04:11.729 "driver_specific": {} 00:04:11.729 } 00:04:11.729 ]' 00:04:11.729 21:03:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:11.729 21:03:23 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:11.729 21:03:23 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:11.729 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.729 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.729 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.729 21:03:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:11.729 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.729 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.729 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.729 21:03:23 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:11.729 21:03:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:11.729 21:03:23 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:11.729 00:04:11.729 real 0m0.069s 00:04:11.729 user 0m0.022s 00:04:11.729 sys 0m0.020s 00:04:11.729 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.729 ************************************ 00:04:11.729 END TEST rpc_plugins 00:04:11.729 ************************************ 00:04:11.729 21:03:23 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:11.987 21:03:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.987 21:03:23 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:11.987 21:03:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.987 21:03:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.987 21:03:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.987 ************************************ 00:04:11.987 START TEST rpc_trace_cmd_test 00:04:11.987 ************************************ 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:11.987 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid45490", 00:04:11.987 "tpoint_group_mask": "0x8", 00:04:11.987 "iscsi_conn": { 00:04:11.987 "mask": "0x2", 00:04:11.987 "tpoint_mask": "0x0" 00:04:11.987 }, 00:04:11.987 "scsi": { 00:04:11.987 "mask": "0x4", 00:04:11.987 "tpoint_mask": "0x0" 00:04:11.987 }, 00:04:11.987 "bdev": { 00:04:11.987 "mask": "0x8", 00:04:11.987 "tpoint_mask": "0xffffffffffffffff" 00:04:11.987 }, 00:04:11.987 "nvmf_rdma": { 00:04:11.987 "mask": "0x10", 00:04:11.987 
"tpoint_mask": "0x0" 00:04:11.987 }, 00:04:11.987 "nvmf_tcp": { 00:04:11.987 "mask": "0x20", 00:04:11.987 "tpoint_mask": "0x0" 00:04:11.987 }, 00:04:11.987 "blobfs": { 00:04:11.987 "mask": "0x80", 00:04:11.987 "tpoint_mask": "0x0" 00:04:11.987 }, 00:04:11.987 "dsa": { 00:04:11.987 "mask": "0x200", 00:04:11.987 "tpoint_mask": "0x0" 00:04:11.987 }, 00:04:11.987 "thread": { 00:04:11.987 "mask": "0x400", 00:04:11.987 "tpoint_mask": "0x0" 00:04:11.987 }, 00:04:11.987 "nvme_pcie": { 00:04:11.987 "mask": "0x800", 00:04:11.987 "tpoint_mask": "0x0" 00:04:11.987 }, 00:04:11.987 "iaa": { 00:04:11.987 "mask": "0x1000", 00:04:11.987 "tpoint_mask": "0x0" 00:04:11.987 }, 00:04:11.987 "nvme_tcp": { 00:04:11.987 "mask": "0x2000", 00:04:11.987 "tpoint_mask": "0x0" 00:04:11.987 }, 00:04:11.987 "bdev_nvme": { 00:04:11.987 "mask": "0x4000", 00:04:11.987 "tpoint_mask": "0x0" 00:04:11.987 }, 00:04:11.987 "sock": { 00:04:11.987 "mask": "0x8000", 00:04:11.987 "tpoint_mask": "0x0" 00:04:11.987 } 00:04:11.987 }' 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 15 -gt 2 ']' 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:11.987 00:04:11.987 real 0m0.058s 00:04:11.987 user 0m0.023s 00:04:11.987 sys 0m0.028s 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:11.987 ************************************ 00:04:11.987 END TEST rpc_trace_cmd_test 00:04:11.987 ************************************ 00:04:11.987 21:03:23 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:11.987 21:03:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:11.987 21:03:23 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:11.987 21:03:23 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:11.987 21:03:23 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:11.987 21:03:23 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:11.987 21:03:23 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:11.987 21:03:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:11.987 ************************************ 00:04:11.987 START TEST rpc_daemon_integrity 00:04:11.987 ************************************ 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:11.987 
21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:11.987 { 00:04:11.987 "name": "Malloc2", 00:04:11.987 "aliases": [ 00:04:11.987 "81b9e017-4224-11ef-aa83-81fbc7dfef58" 00:04:11.987 ], 00:04:11.987 "product_name": "Malloc disk", 00:04:11.987 "block_size": 512, 00:04:11.987 "num_blocks": 16384, 00:04:11.987 "uuid": "81b9e017-4224-11ef-aa83-81fbc7dfef58", 00:04:11.987 "assigned_rate_limits": { 00:04:11.987 "rw_ios_per_sec": 0, 00:04:11.987 "rw_mbytes_per_sec": 0, 00:04:11.987 "r_mbytes_per_sec": 0, 00:04:11.987 "w_mbytes_per_sec": 0 00:04:11.987 }, 00:04:11.987 "claimed": false, 00:04:11.987 "zoned": false, 00:04:11.987 "supported_io_types": { 00:04:11.987 "read": true, 00:04:11.987 "write": true, 00:04:11.987 "unmap": true, 00:04:11.987 "flush": true, 00:04:11.987 "reset": true, 00:04:11.987 "nvme_admin": false, 00:04:11.987 "nvme_io": false, 00:04:11.987 "nvme_io_md": false, 00:04:11.987 "write_zeroes": true, 00:04:11.987 "zcopy": true, 00:04:11.987 "get_zone_info": false, 00:04:11.987 "zone_management": false, 00:04:11.987 "zone_append": false, 00:04:11.987 "compare": false, 00:04:11.987 "compare_and_write": false, 00:04:11.987 "abort": true, 00:04:11.987 "seek_hole": false, 00:04:11.987 "seek_data": false, 00:04:11.987 "copy": true, 00:04:11.987 "nvme_iov_md": false 00:04:11.987 }, 00:04:11.987 "memory_domains": [ 00:04:11.987 { 00:04:11.987 "dma_device_id": "system", 00:04:11.987 "dma_device_type": 1 00:04:11.987 }, 00:04:11.987 { 00:04:11.987 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.987 "dma_device_type": 2 00:04:11.987 } 00:04:11.987 ], 00:04:11.987 "driver_specific": {} 00:04:11.987 } 00:04:11.987 ]' 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.987 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.987 [2024-07-14 21:03:23.487296] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:11.988 [2024-07-14 21:03:23.487348] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:11.988 [2024-07-14 21:03:23.487388] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3607bb437a00 00:04:11.988 [2024-07-14 
21:03:23.487396] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:11.988 [2024-07-14 21:03:23.487822] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:11.988 [2024-07-14 21:03:23.487847] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:11.988 Passthru0 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:11.988 { 00:04:11.988 "name": "Malloc2", 00:04:11.988 "aliases": [ 00:04:11.988 "81b9e017-4224-11ef-aa83-81fbc7dfef58" 00:04:11.988 ], 00:04:11.988 "product_name": "Malloc disk", 00:04:11.988 "block_size": 512, 00:04:11.988 "num_blocks": 16384, 00:04:11.988 "uuid": "81b9e017-4224-11ef-aa83-81fbc7dfef58", 00:04:11.988 "assigned_rate_limits": { 00:04:11.988 "rw_ios_per_sec": 0, 00:04:11.988 "rw_mbytes_per_sec": 0, 00:04:11.988 "r_mbytes_per_sec": 0, 00:04:11.988 "w_mbytes_per_sec": 0 00:04:11.988 }, 00:04:11.988 "claimed": true, 00:04:11.988 "claim_type": "exclusive_write", 00:04:11.988 "zoned": false, 00:04:11.988 "supported_io_types": { 00:04:11.988 "read": true, 00:04:11.988 "write": true, 00:04:11.988 "unmap": true, 00:04:11.988 "flush": true, 00:04:11.988 "reset": true, 00:04:11.988 "nvme_admin": false, 00:04:11.988 "nvme_io": false, 00:04:11.988 "nvme_io_md": false, 00:04:11.988 "write_zeroes": true, 00:04:11.988 "zcopy": true, 00:04:11.988 "get_zone_info": false, 00:04:11.988 "zone_management": false, 00:04:11.988 "zone_append": false, 00:04:11.988 "compare": false, 00:04:11.988 "compare_and_write": false, 00:04:11.988 "abort": true, 00:04:11.988 "seek_hole": false, 00:04:11.988 "seek_data": false, 00:04:11.988 "copy": true, 00:04:11.988 "nvme_iov_md": false 00:04:11.988 }, 00:04:11.988 "memory_domains": [ 00:04:11.988 { 00:04:11.988 "dma_device_id": "system", 00:04:11.988 "dma_device_type": 1 00:04:11.988 }, 00:04:11.988 { 00:04:11.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.988 "dma_device_type": 2 00:04:11.988 } 00:04:11.988 ], 00:04:11.988 "driver_specific": {} 00:04:11.988 }, 00:04:11.988 { 00:04:11.988 "name": "Passthru0", 00:04:11.988 "aliases": [ 00:04:11.988 "bd8d659c-7272-e55f-ac63-c567bdf71da0" 00:04:11.988 ], 00:04:11.988 "product_name": "passthru", 00:04:11.988 "block_size": 512, 00:04:11.988 "num_blocks": 16384, 00:04:11.988 "uuid": "bd8d659c-7272-e55f-ac63-c567bdf71da0", 00:04:11.988 "assigned_rate_limits": { 00:04:11.988 "rw_ios_per_sec": 0, 00:04:11.988 "rw_mbytes_per_sec": 0, 00:04:11.988 "r_mbytes_per_sec": 0, 00:04:11.988 "w_mbytes_per_sec": 0 00:04:11.988 }, 00:04:11.988 "claimed": false, 00:04:11.988 "zoned": false, 00:04:11.988 "supported_io_types": { 00:04:11.988 "read": true, 00:04:11.988 "write": true, 00:04:11.988 "unmap": true, 00:04:11.988 "flush": true, 00:04:11.988 "reset": true, 00:04:11.988 "nvme_admin": false, 00:04:11.988 "nvme_io": false, 00:04:11.988 "nvme_io_md": false, 00:04:11.988 "write_zeroes": true, 00:04:11.988 "zcopy": true, 00:04:11.988 "get_zone_info": false, 00:04:11.988 "zone_management": false, 00:04:11.988 "zone_append": 
false, 00:04:11.988 "compare": false, 00:04:11.988 "compare_and_write": false, 00:04:11.988 "abort": true, 00:04:11.988 "seek_hole": false, 00:04:11.988 "seek_data": false, 00:04:11.988 "copy": true, 00:04:11.988 "nvme_iov_md": false 00:04:11.988 }, 00:04:11.988 "memory_domains": [ 00:04:11.988 { 00:04:11.988 "dma_device_id": "system", 00:04:11.988 "dma_device_type": 1 00:04:11.988 }, 00:04:11.988 { 00:04:11.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:11.988 "dma_device_type": 2 00:04:11.988 } 00:04:11.988 ], 00:04:11.988 "driver_specific": { 00:04:11.988 "passthru": { 00:04:11.988 "name": "Passthru0", 00:04:11.988 "base_bdev_name": "Malloc2" 00:04:11.988 } 00:04:11.988 } 00:04:11.988 } 00:04:11.988 ]' 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:11.988 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.246 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.246 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:12.246 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:12.246 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.246 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:12.246 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:12.246 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:12.246 21:03:23 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:12.246 00:04:12.246 real 0m0.128s 00:04:12.246 user 0m0.007s 00:04:12.246 sys 0m0.066s 00:04:12.246 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.246 ************************************ 00:04:12.246 END TEST rpc_daemon_integrity 00:04:12.246 ************************************ 00:04:12.246 21:03:23 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:12.246 21:03:23 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:12.246 21:03:23 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:12.246 21:03:23 rpc -- rpc/rpc.sh@84 -- # killprocess 45490 00:04:12.246 21:03:23 rpc -- common/autotest_common.sh@948 -- # '[' -z 45490 ']' 00:04:12.246 21:03:23 rpc -- common/autotest_common.sh@952 -- # kill -0 45490 00:04:12.246 21:03:23 rpc -- common/autotest_common.sh@953 -- # uname 00:04:12.246 21:03:23 rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:12.246 21:03:23 rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45490 00:04:12.246 21:03:23 rpc -- common/autotest_common.sh@956 -- # tail -1 00:04:12.247 21:03:23 rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:12.247 21:03:23 rpc -- 
common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:12.247 killing process with pid 45490 00:04:12.247 21:03:23 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45490' 00:04:12.247 21:03:23 rpc -- common/autotest_common.sh@967 -- # kill 45490 00:04:12.247 21:03:23 rpc -- common/autotest_common.sh@972 -- # wait 45490 00:04:12.505 00:04:12.505 real 0m2.071s 00:04:12.505 user 0m2.160s 00:04:12.505 sys 0m0.907s 00:04:12.505 21:03:23 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:12.505 21:03:23 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.505 ************************************ 00:04:12.505 END TEST rpc 00:04:12.505 ************************************ 00:04:12.505 21:03:23 -- common/autotest_common.sh@1142 -- # return 0 00:04:12.505 21:03:23 -- spdk/autotest.sh@170 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:12.505 21:03:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.505 21:03:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.505 21:03:23 -- common/autotest_common.sh@10 -- # set +x 00:04:12.505 ************************************ 00:04:12.505 START TEST skip_rpc 00:04:12.505 ************************************ 00:04:12.505 21:03:23 skip_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:12.505 * Looking for test storage... 00:04:12.505 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:12.505 21:03:24 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:12.505 21:03:24 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:12.505 21:03:24 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:12.505 21:03:24 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:12.505 21:03:24 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:12.505 21:03:24 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:12.764 ************************************ 00:04:12.764 START TEST skip_rpc 00:04:12.764 ************************************ 00:04:12.764 21:03:24 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:12.764 21:03:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=45666 00:04:12.764 21:03:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:12.764 21:03:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:12.764 21:03:24 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:12.764 [2024-07-14 21:03:24.070715] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:12.764 [2024-07-14 21:03:24.071034] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:13.331 EAL: TSC is not safe to use in SMP mode 00:04:13.331 EAL: TSC is not invariant 00:04:13.331 [2024-07-14 21:03:24.613987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:13.331 [2024-07-14 21:03:24.690845] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
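The skip_rpc case starting above launches spdk_tgt with --no-rpc-server, so the test can only pass if every RPC attempt fails while the target is up. A minimal standalone sketch of that check, assuming the same repo layout and the default /var/tmp/spdk.sock socket this run uses:

    # no RPC listener is created, so rpc.py has nothing to connect to
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
    pid=$!
    sleep 5
    if /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version; then
        echo 'unexpected: spdk_get_version succeeded without an RPC server'
        exit 1
    fi
    kill "$pid"
    wait "$pid" 2>/dev/null || true   # SIGTERM exit status is expected here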
00:04:13.331 [2024-07-14 21:03:24.693250] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 45666 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 45666 ']' 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 45666 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # tail -1 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 45666 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:18.600 killing process with pid 45666 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45666' 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 45666 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 45666 00:04:18.600 00:04:18.600 real 0m5.417s 00:04:18.600 user 0m4.859s 00:04:18.600 sys 0m0.575s 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:18.600 21:03:29 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.600 ************************************ 00:04:18.600 END TEST skip_rpc 00:04:18.600 ************************************ 00:04:18.600 21:03:29 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:18.600 21:03:29 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:18.600 21:03:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:18.600 21:03:29 skip_rpc -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:04:18.600 21:03:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.600 ************************************ 00:04:18.600 START TEST skip_rpc_with_json 00:04:18.600 ************************************ 00:04:18.600 21:03:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:18.600 21:03:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:18.600 21:03:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=45711 00:04:18.600 21:03:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:18.600 21:03:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.600 21:03:29 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 45711 00:04:18.600 21:03:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 45711 ']' 00:04:18.600 21:03:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:18.601 21:03:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:18.601 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:18.601 21:03:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:18.601 21:03:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:18.601 21:03:29 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:18.601 [2024-07-14 21:03:29.535169] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:18.601 [2024-07-14 21:03:29.535367] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:18.601 EAL: TSC is not safe to use in SMP mode 00:04:18.601 EAL: TSC is not invariant 00:04:18.601 [2024-07-14 21:03:30.065905] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.601 [2024-07-14 21:03:30.144885] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
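What follows in skip_rpc_with_json is a save/restore round-trip: the test proves the tcp transport is absent, creates it, snapshots the runtime state with save_config, and later relaunches the target from that file. Reduced to the essential commands from this trace (the redirection into log.txt stands in for however the harness captures output):

    scripts/rpc.py nvmf_get_transports --trtype tcp   # fails until the transport exists
    scripts/rpc.py nvmf_create_transport -t tcp
    scripts/rpc.py save_config > test/rpc/config.json
    # a fresh target replays the saved file with no RPC server at all
    build/bin/spdk_tgt --no-rpc-server -m 0x1 --json test/rpc/config.json \
        > test/rpc/log.txt 2>&1 &
    sleep 5
    grep -q 'TCP Transport Init' test/rpc/log.txt   # the transport must come back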
00:04:18.601 [2024-07-14 21:03:30.147254] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.168 [2024-07-14 21:03:30.552305] nvmf_rpc.c:2562:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:19.168 request: 00:04:19.168 { 00:04:19.168 "trtype": "tcp", 00:04:19.168 "method": "nvmf_get_transports", 00:04:19.168 "req_id": 1 00:04:19.168 } 00:04:19.168 Got JSON-RPC error response 00:04:19.168 response: 00:04:19.168 { 00:04:19.168 "code": -19, 00:04:19.168 "message": "Operation not supported by device" 00:04:19.168 } 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.168 [2024-07-14 21:03:30.560331] tcp.c: 672:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:19.168 21:03:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.168 { 00:04:19.168 "subsystems": [ 00:04:19.168 { 00:04:19.168 "subsystem": "vmd", 00:04:19.168 "config": [] 00:04:19.168 }, 00:04:19.168 { 00:04:19.168 "subsystem": "iobuf", 00:04:19.168 "config": [ 00:04:19.168 { 00:04:19.168 "method": "iobuf_set_options", 00:04:19.168 "params": { 00:04:19.168 "small_pool_count": 8192, 00:04:19.168 "large_pool_count": 1024, 00:04:19.168 "small_bufsize": 8192, 00:04:19.168 "large_bufsize": 135168 00:04:19.168 } 00:04:19.168 } 00:04:19.168 ] 00:04:19.168 }, 00:04:19.168 { 00:04:19.168 "subsystem": "scheduler", 00:04:19.168 "config": [ 00:04:19.168 { 00:04:19.168 "method": "framework_set_scheduler", 00:04:19.168 "params": { 00:04:19.168 "name": "static" 00:04:19.168 } 00:04:19.168 } 00:04:19.168 ] 00:04:19.168 }, 00:04:19.168 { 00:04:19.168 "subsystem": "sock", 00:04:19.168 "config": [ 00:04:19.168 { 00:04:19.168 "method": "sock_set_default_impl", 00:04:19.168 "params": { 00:04:19.168 "impl_name": "posix" 00:04:19.168 } 00:04:19.168 }, 00:04:19.168 { 00:04:19.168 "method": "sock_impl_set_options", 00:04:19.168 "params": { 00:04:19.168 "impl_name": "ssl", 00:04:19.168 "recv_buf_size": 4096, 00:04:19.168 "send_buf_size": 4096, 00:04:19.168 "enable_recv_pipe": true, 00:04:19.168 "enable_quickack": false, 00:04:19.168 "enable_placement_id": 0, 00:04:19.168 
"enable_zerocopy_send_server": true, 00:04:19.168 "enable_zerocopy_send_client": false, 00:04:19.168 "zerocopy_threshold": 0, 00:04:19.168 "tls_version": 0, 00:04:19.168 "enable_ktls": false 00:04:19.168 } 00:04:19.168 }, 00:04:19.168 { 00:04:19.168 "method": "sock_impl_set_options", 00:04:19.168 "params": { 00:04:19.168 "impl_name": "posix", 00:04:19.168 "recv_buf_size": 2097152, 00:04:19.168 "send_buf_size": 2097152, 00:04:19.168 "enable_recv_pipe": true, 00:04:19.168 "enable_quickack": false, 00:04:19.168 "enable_placement_id": 0, 00:04:19.168 "enable_zerocopy_send_server": true, 00:04:19.168 "enable_zerocopy_send_client": false, 00:04:19.168 "zerocopy_threshold": 0, 00:04:19.168 "tls_version": 0, 00:04:19.168 "enable_ktls": false 00:04:19.168 } 00:04:19.168 } 00:04:19.168 ] 00:04:19.168 }, 00:04:19.168 { 00:04:19.168 "subsystem": "keyring", 00:04:19.168 "config": [] 00:04:19.168 }, 00:04:19.168 { 00:04:19.168 "subsystem": "accel", 00:04:19.168 "config": [ 00:04:19.168 { 00:04:19.168 "method": "accel_set_options", 00:04:19.168 "params": { 00:04:19.168 "small_cache_size": 128, 00:04:19.168 "large_cache_size": 16, 00:04:19.168 "task_count": 2048, 00:04:19.168 "sequence_count": 2048, 00:04:19.168 "buf_count": 2048 00:04:19.168 } 00:04:19.168 } 00:04:19.168 ] 00:04:19.168 }, 00:04:19.168 { 00:04:19.168 "subsystem": "bdev", 00:04:19.168 "config": [ 00:04:19.168 { 00:04:19.168 "method": "bdev_set_options", 00:04:19.168 "params": { 00:04:19.168 "bdev_io_pool_size": 65535, 00:04:19.168 "bdev_io_cache_size": 256, 00:04:19.168 "bdev_auto_examine": true, 00:04:19.168 "iobuf_small_cache_size": 128, 00:04:19.168 "iobuf_large_cache_size": 16 00:04:19.168 } 00:04:19.168 }, 00:04:19.168 { 00:04:19.168 "method": "bdev_raid_set_options", 00:04:19.168 "params": { 00:04:19.168 "process_window_size_kb": 1024 00:04:19.168 } 00:04:19.168 }, 00:04:19.168 { 00:04:19.168 "method": "bdev_nvme_set_options", 00:04:19.168 "params": { 00:04:19.168 "action_on_timeout": "none", 00:04:19.168 "timeout_us": 0, 00:04:19.168 "timeout_admin_us": 0, 00:04:19.168 "keep_alive_timeout_ms": 10000, 00:04:19.168 "arbitration_burst": 0, 00:04:19.168 "low_priority_weight": 0, 00:04:19.168 "medium_priority_weight": 0, 00:04:19.168 "high_priority_weight": 0, 00:04:19.168 "nvme_adminq_poll_period_us": 10000, 00:04:19.168 "nvme_ioq_poll_period_us": 0, 00:04:19.168 "io_queue_requests": 0, 00:04:19.168 "delay_cmd_submit": true, 00:04:19.168 "transport_retry_count": 4, 00:04:19.168 "bdev_retry_count": 3, 00:04:19.168 "transport_ack_timeout": 0, 00:04:19.168 "ctrlr_loss_timeout_sec": 0, 00:04:19.168 "reconnect_delay_sec": 0, 00:04:19.168 "fast_io_fail_timeout_sec": 0, 00:04:19.168 "disable_auto_failback": false, 00:04:19.168 "generate_uuids": false, 00:04:19.168 "transport_tos": 0, 00:04:19.168 "nvme_error_stat": false, 00:04:19.168 "rdma_srq_size": 0, 00:04:19.168 "io_path_stat": false, 00:04:19.168 "allow_accel_sequence": false, 00:04:19.168 "rdma_max_cq_size": 0, 00:04:19.168 "rdma_cm_event_timeout_ms": 0, 00:04:19.168 "dhchap_digests": [ 00:04:19.168 "sha256", 00:04:19.168 "sha384", 00:04:19.168 "sha512" 00:04:19.168 ], 00:04:19.168 "dhchap_dhgroups": [ 00:04:19.168 "null", 00:04:19.168 "ffdhe2048", 00:04:19.168 "ffdhe3072", 00:04:19.168 "ffdhe4096", 00:04:19.168 "ffdhe6144", 00:04:19.168 "ffdhe8192" 00:04:19.168 ] 00:04:19.168 } 00:04:19.168 }, 00:04:19.168 { 00:04:19.168 "method": "bdev_nvme_set_hotplug", 00:04:19.168 "params": { 00:04:19.168 "period_us": 100000, 00:04:19.168 "enable": false 00:04:19.168 } 00:04:19.168 }, 00:04:19.168 
{ 00:04:19.168 "method": "bdev_wait_for_examine" 00:04:19.169 } 00:04:19.169 ] 00:04:19.169 }, 00:04:19.169 { 00:04:19.169 "subsystem": "scsi", 00:04:19.169 "config": null 00:04:19.169 }, 00:04:19.169 { 00:04:19.169 "subsystem": "nvmf", 00:04:19.169 "config": [ 00:04:19.169 { 00:04:19.169 "method": "nvmf_set_config", 00:04:19.169 "params": { 00:04:19.169 "discovery_filter": "match_any", 00:04:19.169 "admin_cmd_passthru": { 00:04:19.169 "identify_ctrlr": false 00:04:19.169 } 00:04:19.169 } 00:04:19.169 }, 00:04:19.169 { 00:04:19.169 "method": "nvmf_set_max_subsystems", 00:04:19.169 "params": { 00:04:19.169 "max_subsystems": 1024 00:04:19.169 } 00:04:19.169 }, 00:04:19.169 { 00:04:19.169 "method": "nvmf_set_crdt", 00:04:19.169 "params": { 00:04:19.169 "crdt1": 0, 00:04:19.169 "crdt2": 0, 00:04:19.169 "crdt3": 0 00:04:19.169 } 00:04:19.169 }, 00:04:19.169 { 00:04:19.169 "method": "nvmf_create_transport", 00:04:19.169 "params": { 00:04:19.169 "trtype": "TCP", 00:04:19.169 "max_queue_depth": 128, 00:04:19.169 "max_io_qpairs_per_ctrlr": 127, 00:04:19.169 "in_capsule_data_size": 4096, 00:04:19.169 "max_io_size": 131072, 00:04:19.169 "io_unit_size": 131072, 00:04:19.169 "max_aq_depth": 128, 00:04:19.169 "num_shared_buffers": 511, 00:04:19.169 "buf_cache_size": 4294967295, 00:04:19.169 "dif_insert_or_strip": false, 00:04:19.169 "zcopy": false, 00:04:19.169 "c2h_success": true, 00:04:19.169 "sock_priority": 0, 00:04:19.169 "abort_timeout_sec": 1, 00:04:19.169 "ack_timeout": 0, 00:04:19.169 "data_wr_pool_size": 0 00:04:19.169 } 00:04:19.169 } 00:04:19.169 ] 00:04:19.169 }, 00:04:19.169 { 00:04:19.169 "subsystem": "iscsi", 00:04:19.169 "config": [ 00:04:19.169 { 00:04:19.169 "method": "iscsi_set_options", 00:04:19.169 "params": { 00:04:19.169 "node_base": "iqn.2016-06.io.spdk", 00:04:19.169 "max_sessions": 128, 00:04:19.169 "max_connections_per_session": 2, 00:04:19.169 "max_queue_depth": 64, 00:04:19.169 "default_time2wait": 2, 00:04:19.169 "default_time2retain": 20, 00:04:19.169 "first_burst_length": 8192, 00:04:19.169 "immediate_data": true, 00:04:19.169 "allow_duplicated_isid": false, 00:04:19.169 "error_recovery_level": 0, 00:04:19.169 "nop_timeout": 60, 00:04:19.169 "nop_in_interval": 30, 00:04:19.169 "disable_chap": false, 00:04:19.169 "require_chap": false, 00:04:19.169 "mutual_chap": false, 00:04:19.169 "chap_group": 0, 00:04:19.169 "max_large_datain_per_connection": 64, 00:04:19.169 "max_r2t_per_connection": 4, 00:04:19.169 "pdu_pool_size": 36864, 00:04:19.169 "immediate_data_pool_size": 16384, 00:04:19.169 "data_out_pool_size": 2048 00:04:19.169 } 00:04:19.169 } 00:04:19.169 ] 00:04:19.169 } 00:04:19.169 ] 00:04:19.169 } 00:04:19.169 21:03:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:19.169 21:03:30 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 45711 00:04:19.169 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45711 ']' 00:04:19.169 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45711 00:04:19.169 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:19.169 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:19.169 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45711 00:04:19.169 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:04:19.169 21:03:30 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:19.169 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:19.169 killing process with pid 45711 00:04:19.169 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45711' 00:04:19.169 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45711 00:04:19.169 21:03:30 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45711 00:04:19.736 21:03:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=45725 00:04:19.736 21:03:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:19.736 21:03:31 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:25.000 21:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 45725 00:04:25.000 21:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 45725 ']' 00:04:25.000 21:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 45725 00:04:25.000 21:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:25.000 21:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:25.000 21:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps -c -o command 45725 00:04:25.000 21:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # tail -1 00:04:25.000 21:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:25.001 killing process with pid 45725 00:04:25.001 21:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:25.001 21:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 45725' 00:04:25.001 21:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 45725 00:04:25.001 21:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 45725 00:04:25.001 21:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:25.001 21:03:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:25.001 00:04:25.001 real 0m7.014s 00:04:25.001 user 0m6.157s 00:04:25.001 sys 0m1.413s 00:04:25.001 21:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.001 21:03:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.001 ************************************ 00:04:25.001 END TEST skip_rpc_with_json 00:04:25.001 ************************************ 00:04:25.259 21:03:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:25.260 21:03:36 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:25.260 21:03:36 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.260 21:03:36 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.260 21:03:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.260 ************************************ 00:04:25.260 START TEST skip_rpc_with_delay 00:04:25.260 ************************************ 00:04:25.260 21:03:36 
skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:25.260 [2024-07-14 21:03:36.598888] app.c: 831:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
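The error above is the whole point of skip_rpc_with_delay: --wait-for-rpc pauses initialization until an RPC arrives, and --no-rpc-server guarantees none ever can, so spdk_tgt rejects the combination at startup. The NOT wrapper in the trace asserts exactly that, roughly:

    # must exit non-zero; a clean start here would be the bug
    if build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc; then
        echo 'unexpected: contradictory flags were accepted'
        exit 1
    fi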
00:04:25.260 [2024-07-14 21:03:36.599141] app.c: 710:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:25.260 00:04:25.260 real 0m0.010s 00:04:25.260 user 0m0.007s 00:04:25.260 sys 0m0.006s 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.260 ************************************ 00:04:25.260 END TEST skip_rpc_with_delay 00:04:25.260 ************************************ 00:04:25.260 21:03:36 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:25.260 21:03:36 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:25.260 21:03:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:25.260 21:03:36 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' FreeBSD '!=' FreeBSD ']' 00:04:25.260 21:03:36 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.260 00:04:25.260 real 0m12.736s 00:04:25.260 user 0m11.164s 00:04:25.260 sys 0m2.178s 00:04:25.260 21:03:36 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.260 21:03:36 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.260 ************************************ 00:04:25.260 END TEST skip_rpc 00:04:25.260 ************************************ 00:04:25.260 21:03:36 -- common/autotest_common.sh@1142 -- # return 0 00:04:25.260 21:03:36 -- spdk/autotest.sh@171 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:25.260 21:03:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.260 21:03:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.260 21:03:36 -- common/autotest_common.sh@10 -- # set +x 00:04:25.260 ************************************ 00:04:25.260 START TEST rpc_client 00:04:25.260 ************************************ 00:04:25.260 21:03:36 rpc_client -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:25.518 * Looking for test storage... 
00:04:25.518 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:25.518 21:03:36 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:25.518 OK 00:04:25.518 21:03:36 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:25.518 00:04:25.518 real 0m0.141s 00:04:25.518 user 0m0.094s 00:04:25.518 sys 0m0.112s 00:04:25.518 21:03:36 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:25.518 ************************************ 00:04:25.518 END TEST rpc_client 00:04:25.518 ************************************ 00:04:25.518 21:03:36 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:25.519 21:03:36 -- common/autotest_common.sh@1142 -- # return 0 00:04:25.519 21:03:36 -- spdk/autotest.sh@172 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:25.519 21:03:36 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:25.519 21:03:36 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:25.519 21:03:36 -- common/autotest_common.sh@10 -- # set +x 00:04:25.519 ************************************ 00:04:25.519 START TEST json_config 00:04:25.519 ************************************ 00:04:25.519 21:03:36 json_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:25.519 21:03:37 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:25.519 21:03:37 json_config -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:04:25.519 21:03:37 json_config -- nvmf/common.sh@7 -- # return 0 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/spdk_tgt_config.json' ['initiator']='/home/vagrant/spdk_repo/spdk/spdk_initiator_config.json') 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@355 -- # trap 'on_error_exit 
"${FUNCNAME}" "${LINENO}"' ERR 00:04:25.519 INFO: JSON configuration test init 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@356 -- # echo 'INFO: JSON configuration test init' 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@357 -- # json_config_test_init 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@262 -- # timing_enter json_config_test_init 00:04:25.519 21:03:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:25.519 21:03:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@263 -- # timing_enter json_config_setup_target 00:04:25.519 21:03:37 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:25.519 21:03:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.519 21:03:37 json_config -- json_config/json_config.sh@265 -- # json_config_test_start_app target --wait-for-rpc 00:04:25.519 21:03:37 json_config -- json_config/common.sh@9 -- # local app=target 00:04:25.519 21:03:37 json_config -- json_config/common.sh@10 -- # shift 00:04:25.519 21:03:37 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:25.519 21:03:37 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:25.519 21:03:37 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:25.519 21:03:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.519 21:03:37 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:25.519 21:03:37 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=45884 00:04:25.519 21:03:37 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:25.519 21:03:37 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:25.519 Waiting for target to run... 00:04:25.519 21:03:37 json_config -- json_config/common.sh@25 -- # waitforlisten 45884 /var/tmp/spdk_tgt.sock 00:04:25.519 21:03:37 json_config -- common/autotest_common.sh@829 -- # '[' -z 45884 ']' 00:04:25.519 21:03:37 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:25.519 21:03:37 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:25.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:25.519 21:03:37 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:25.519 21:03:37 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:25.519 21:03:37 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:25.519 [2024-07-14 21:03:37.036730] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:25.519 [2024-07-14 21:03:37.036971] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:26.086 EAL: TSC is not safe to use in SMP mode 00:04:26.086 EAL: TSC is not invariant 00:04:26.086 [2024-07-14 21:03:37.357140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:26.086 [2024-07-14 21:03:37.452118] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:26.086 [2024-07-14 21:03:37.454593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.652 21:03:38 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:26.652 21:03:38 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:26.652 00:04:26.652 21:03:38 json_config -- json_config/common.sh@26 -- # echo '' 00:04:26.652 21:03:38 json_config -- json_config/json_config.sh@269 -- # create_accel_config 00:04:26.652 21:03:38 json_config -- json_config/json_config.sh@93 -- # timing_enter create_accel_config 00:04:26.652 21:03:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:26.652 21:03:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.652 21:03:38 json_config -- json_config/json_config.sh@95 -- # [[ 0 -eq 1 ]] 00:04:26.652 21:03:38 json_config -- json_config/json_config.sh@101 -- # timing_exit create_accel_config 00:04:26.652 21:03:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:26.652 21:03:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.652 21:03:38 json_config -- json_config/json_config.sh@273 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:26.652 21:03:38 json_config -- json_config/json_config.sh@274 -- # tgt_rpc load_config 00:04:26.652 21:03:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:26.911 [2024-07-14 21:03:38.394805] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:04:27.169 21:03:38 json_config -- json_config/json_config.sh@276 -- # tgt_check_notification_types 00:04:27.169 21:03:38 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:27.169 21:03:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:27.169 21:03:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.169 21:03:38 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:27.169 21:03:38 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:27.170 21:03:38 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:27.170 21:03:38 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:27.170 21:03:38 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:27.170 21:03:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:27.428 21:03:38 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:27.428 21:03:38 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:27.428 21:03:38 json_config -- json_config/json_config.sh@49 -- # [[ bdev_register bdev_unregister != \b\d\e\v\_\r\e\g\i\s\t\e\r\ \b\d\e\v\_\u\n\r\e\g\i\s\t\e\r ]] 00:04:27.428 21:03:38 json_config -- json_config/json_config.sh@54 -- # timing_exit tgt_check_notification_types 00:04:27.428 21:03:38 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:27.428 21:03:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.428 21:03:38 json_config -- json_config/json_config.sh@55 -- # return 0 00:04:27.428 21:03:38 json_config -- json_config/json_config.sh@278 -- # [[ 1 -eq 1 ]] 00:04:27.428 21:03:38 json_config -- json_config/json_config.sh@279 -- # create_bdev_subsystem_config 00:04:27.428 
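From this point every bdev the test creates must also surface as a bdev_register notification; the harness records the events and later compares the sorted expected and actual lists. The two RPCs doing that bookkeeping in the trace:

    # the target advertises which event types it emits ...
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
    # ... and replays every event recorded since id 0
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0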
21:03:38 json_config -- json_config/json_config.sh@105 -- # timing_enter create_bdev_subsystem_config 00:04:27.428 21:03:38 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:27.428 21:03:38 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:27.429 21:03:38 json_config -- json_config/json_config.sh@107 -- # expected_notifications=() 00:04:27.429 21:03:38 json_config -- json_config/json_config.sh@107 -- # local expected_notifications 00:04:27.429 21:03:38 json_config -- json_config/json_config.sh@111 -- # expected_notifications+=($(get_notifications)) 00:04:27.429 21:03:38 json_config -- json_config/json_config.sh@111 -- # get_notifications 00:04:27.429 21:03:38 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:04:27.429 21:03:38 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:27.429 21:03:38 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:27.429 21:03:38 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:04:27.429 21:03:38 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:04:27.429 21:03:38 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:04:27.686 21:03:39 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:04:27.686 21:03:39 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:27.686 21:03:39 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:27.686 21:03:39 json_config -- json_config/json_config.sh@113 -- # [[ 1 -eq 1 ]] 00:04:27.686 21:03:39 json_config -- json_config/json_config.sh@114 -- # local lvol_store_base_bdev=Nvme0n1 00:04:27.686 21:03:39 json_config -- json_config/json_config.sh@116 -- # tgt_rpc bdev_split_create Nvme0n1 2 00:04:27.686 21:03:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Nvme0n1 2 00:04:27.944 Nvme0n1p0 Nvme0n1p1 00:04:27.944 21:03:39 json_config -- json_config/json_config.sh@117 -- # tgt_rpc bdev_split_create Malloc0 3 00:04:27.944 21:03:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_split_create Malloc0 3 00:04:28.203 [2024-07-14 21:03:39.535892] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:28.203 [2024-07-14 21:03:39.535962] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:28.203 00:04:28.203 21:03:39 json_config -- json_config/json_config.sh@118 -- # tgt_rpc bdev_malloc_create 8 4096 --name Malloc3 00:04:28.203 21:03:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3 00:04:28.462 Malloc3 00:04:28.462 21:03:39 json_config -- json_config/json_config.sh@119 -- # tgt_rpc bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:04:28.462 21:03:39 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3 00:04:28.753 [2024-07-14 21:03:40.099984] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:28.753 [2024-07-14 21:03:40.100085] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 
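The passthru registration above mirrors the Passthru0-on-Malloc2 pair from the rpc suite earlier: the passthru vbdev opens and claims its base bdev, which is why the base then reports "claimed": true in bdev_get_bdevs output. The two RPCs behind it, as issued in this trace:

    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 4096 --name Malloc3
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_passthru_create -b Malloc3 -p PTBdevFromMalloc3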
00:04:28.753 [2024-07-14 21:03:40.100122] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x258c12438180 00:04:28.753 [2024-07-14 21:03:40.100135] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:28.753 [2024-07-14 21:03:40.100981] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:28.753 [2024-07-14 21:03:40.101019] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:04:28.753 PTBdevFromMalloc3 00:04:28.753 21:03:40 json_config -- json_config/json_config.sh@121 -- # tgt_rpc bdev_null_create Null0 32 512 00:04:28.753 21:03:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_null_create Null0 32 512 00:04:29.011 Null0 00:04:29.011 21:03:40 json_config -- json_config/json_config.sh@123 -- # tgt_rpc bdev_malloc_create 32 512 --name Malloc0 00:04:29.011 21:03:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 32 512 --name Malloc0 00:04:29.011 Malloc0 00:04:29.269 21:03:40 json_config -- json_config/json_config.sh@124 -- # tgt_rpc bdev_malloc_create 16 4096 --name Malloc1 00:04:29.269 21:03:40 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 16 4096 --name Malloc1 00:04:29.527 Malloc1 00:04:29.527 21:03:40 json_config -- json_config/json_config.sh@137 -- # expected_notifications+=(bdev_register:${lvol_store_base_bdev}p1 bdev_register:${lvol_store_base_bdev}p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1) 00:04:29.527 21:03:40 json_config -- json_config/json_config.sh@140 -- # dd if=/dev/zero of=/sample_aio bs=1024 count=102400 00:04:29.785 102400+0 records in 00:04:29.785 102400+0 records out 00:04:29.785 104857600 bytes transferred in 0.354828 secs (295516768 bytes/sec) 00:04:29.785 21:03:41 json_config -- json_config/json_config.sh@141 -- # tgt_rpc bdev_aio_create /sample_aio aio_disk 1024 00:04:29.785 21:03:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_aio_create /sample_aio aio_disk 1024 00:04:30.043 aio_disk 00:04:30.043 21:03:41 json_config -- json_config/json_config.sh@142 -- # expected_notifications+=(bdev_register:aio_disk) 00:04:30.043 21:03:41 json_config -- json_config/json_config.sh@147 -- # tgt_rpc bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:04:30.043 21:03:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create_lvstore -c 1048576 Nvme0n1p0 lvs_test 00:04:30.302 8c9c4563-4224-11ef-aa83-81fbc7dfef58 00:04:30.302 21:03:41 json_config -- json_config/json_config.sh@154 -- # expected_notifications+=("bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test lvol0 32)" "bdev_register:$(tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32)" "bdev_register:$(tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0)" "bdev_register:$(tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0)") 00:04:30.302 21:03:41 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test lvol0 32 00:04:30.302 21:03:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test lvol0 32 00:04:30.560 21:03:41 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_create -l lvs_test -t lvol1 32 00:04:30.560 21:03:41 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_create -l lvs_test -t lvol1 32 00:04:30.818 21:03:42 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:04:30.818 21:03:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_snapshot lvs_test/lvol0 snapshot0 00:04:31.077 21:03:42 json_config -- json_config/json_config.sh@154 -- # tgt_rpc bdev_lvol_clone lvs_test/snapshot0 clone0 00:04:31.077 21:03:42 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_clone lvs_test/snapshot0 clone0 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@157 -- # [[ 0 -eq 1 ]] 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@172 -- # [[ 0 -eq 1 ]] 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@178 -- # tgt_check_notifications bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:8cbf0fb0-4224-11ef-aa83-81fbc7dfef58 bdev_register:8cef485a-4224-11ef-aa83-81fbc7dfef58 bdev_register:8d1a9eeb-4224-11ef-aa83-81fbc7dfef58 bdev_register:8d3e05bb-4224-11ef-aa83-81fbc7dfef58 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@67 -- # local events_to_check 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@68 -- # local recorded_events 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@71 -- # events_to_check=($(printf '%s\n' "$@" | sort)) 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@71 -- # printf '%s\n' bdev_register:Nvme0n1 bdev_register:Nvme0n1p1 bdev_register:Nvme0n1p0 bdev_register:Malloc3 bdev_register:PTBdevFromMalloc3 bdev_register:Null0 bdev_register:Malloc0 bdev_register:Malloc0p2 bdev_register:Malloc0p1 bdev_register:Malloc0p0 bdev_register:Malloc1 bdev_register:aio_disk bdev_register:8cbf0fb0-4224-11ef-aa83-81fbc7dfef58 bdev_register:8cef485a-4224-11ef-aa83-81fbc7dfef58 bdev_register:8d1a9eeb-4224-11ef-aa83-81fbc7dfef58 bdev_register:8d3e05bb-4224-11ef-aa83-81fbc7dfef58 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@71 -- # sort 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@72 -- # recorded_events=($(get_notifications | sort)) 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@72 -- # get_notifications 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@72 -- # sort 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@59 -- # local ev_type ev_ctx event_id 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@58 -- # jq -r '.[] | "\(.type):\(.ctx):\(.id)"' 00:04:31.352 21:03:42 json_config -- json_config/json_config.sh@58 -- # tgt_rpc notify_get_notifications -i 0 00:04:31.352 21:03:42 json_config -- json_config/common.sh@57 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_notifications -i 0 00:04:31.610 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p1 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Nvme0n1p0 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc3 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:PTBdevFromMalloc3 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Null0 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p2 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p1 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc0p0 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:Malloc1 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:aio_disk 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- 
json_config/json_config.sh@62 -- # echo bdev_register:8cbf0fb0-4224-11ef-aa83-81fbc7dfef58 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:8cef485a-4224-11ef-aa83-81fbc7dfef58 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:8d1a9eeb-4224-11ef-aa83-81fbc7dfef58 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@62 -- # echo bdev_register:8d3e05bb-4224-11ef-aa83-81fbc7dfef58 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # IFS=: 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@61 -- # read -r ev_type ev_ctx event_id 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@74 -- # [[ bdev_register:8cbf0fb0-4224-11ef-aa83-81fbc7dfef58 bdev_register:8cef485a-4224-11ef-aa83-81fbc7dfef58 bdev_register:8d1a9eeb-4224-11ef-aa83-81fbc7dfef58 bdev_register:8d3e05bb-4224-11ef-aa83-81fbc7dfef58 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk != \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\c\b\f\0\f\b\0\-\4\2\2\4\-\1\1\e\f\-\a\a\8\3\-\8\1\f\b\c\7\d\f\e\f\5\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\c\e\f\4\8\5\a\-\4\2\2\4\-\1\1\e\f\-\a\a\8\3\-\8\1\f\b\c\7\d\f\e\f\5\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\d\1\a\9\e\e\b\-\4\2\2\4\-\1\1\e\f\-\a\a\8\3\-\8\1\f\b\c\7\d\f\e\f\5\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\8\d\3\e\0\5\b\b\-\4\2\2\4\-\1\1\e\f\-\a\a\8\3\-\8\1\f\b\c\7\d\f\e\f\5\8\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\0\p\2\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\u\l\l\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\0\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\N\v\m\e\0\n\1\p\1\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\P\T\B\d\e\v\F\r\o\m\M\a\l\l\o\c\3\ \b\d\e\v\_\r\e\g\i\s\t\e\r\:\a\i\o\_\d\i\s\k ]] 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@86 -- # cat 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@86 -- # printf ' %s\n' bdev_register:8cbf0fb0-4224-11ef-aa83-81fbc7dfef58 bdev_register:8cef485a-4224-11ef-aa83-81fbc7dfef58 bdev_register:8d1a9eeb-4224-11ef-aa83-81fbc7dfef58 bdev_register:8d3e05bb-4224-11ef-aa83-81fbc7dfef58 bdev_register:Malloc0 bdev_register:Malloc0p0 bdev_register:Malloc0p1 bdev_register:Malloc0p2 bdev_register:Malloc1 bdev_register:Malloc3 bdev_register:Null0 bdev_register:Nvme0n1 bdev_register:Nvme0n1p0 bdev_register:Nvme0n1p1 bdev_register:PTBdevFromMalloc3 bdev_register:aio_disk 00:04:31.611 Expected events matched: 00:04:31.611 bdev_register:8cbf0fb0-4224-11ef-aa83-81fbc7dfef58 00:04:31.611 
bdev_register:8cef485a-4224-11ef-aa83-81fbc7dfef58 00:04:31.611 bdev_register:8d1a9eeb-4224-11ef-aa83-81fbc7dfef58 00:04:31.611 bdev_register:8d3e05bb-4224-11ef-aa83-81fbc7dfef58 00:04:31.611 bdev_register:Malloc0 00:04:31.611 bdev_register:Malloc0p0 00:04:31.611 bdev_register:Malloc0p1 00:04:31.611 bdev_register:Malloc0p2 00:04:31.611 bdev_register:Malloc1 00:04:31.611 bdev_register:Malloc3 00:04:31.611 bdev_register:Null0 00:04:31.611 bdev_register:Nvme0n1 00:04:31.611 bdev_register:Nvme0n1p0 00:04:31.611 bdev_register:Nvme0n1p1 00:04:31.611 bdev_register:PTBdevFromMalloc3 00:04:31.611 bdev_register:aio_disk 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@180 -- # timing_exit create_bdev_subsystem_config 00:04:31.611 21:03:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.611 21:03:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@293 -- # timing_exit json_config_setup_target 00:04:31.611 21:03:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.611 21:03:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@295 -- # [[ 0 -eq 1 ]] 00:04:31.611 21:03:43 json_config -- json_config/json_config.sh@300 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.611 21:03:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:31.868 MallocBdevForConfigChangeCheck 00:04:31.868 21:03:43 json_config -- json_config/json_config.sh@302 -- # timing_exit json_config_test_init 00:04:31.868 21:03:43 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:31.868 21:03:43 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.126 21:03:43 json_config -- json_config/json_config.sh@359 -- # tgt_rpc save_config 00:04:32.126 21:03:43 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.383 INFO: shutting down applications... 00:04:32.383 21:03:43 json_config -- json_config/json_config.sh@361 -- # echo 'INFO: shutting down applications...' 
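Teardown is itself a test: clear_config.py walks the subsystems deleting whatever it finds, and the loop below re-saves the config and filters it until only global parameters remain. In outline, using the helper paths from this run:

    test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
    # an empty result from check_empty means every created object was torn down
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | test/json_config/config_filter.py -method delete_global_parameters \
        | test/json_config/config_filter.py -method check_empty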
00:04:32.383 21:03:43 json_config -- json_config/json_config.sh@362 -- # [[ 0 -eq 1 ]] 00:04:32.383 21:03:43 json_config -- json_config/json_config.sh@368 -- # json_config_clear target 00:04:32.383 21:03:43 json_config -- json_config/json_config.sh@332 -- # [[ -n 22 ]] 00:04:32.383 21:03:43 json_config -- json_config/json_config.sh@333 -- # /home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:32.641 [2024-07-14 21:03:43.952418] vbdev_lvol.c: 151:vbdev_lvs_hotremove_cb: *NOTICE*: bdev Nvme0n1p0 being removed: closing lvstore lvs_test 00:04:32.641 Calling clear_iscsi_subsystem 00:04:32.641 Calling clear_nvmf_subsystem 00:04:32.641 Calling clear_bdev_subsystem 00:04:32.641 21:03:44 json_config -- json_config/json_config.sh@337 -- # local config_filter=/home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py 00:04:32.641 21:03:44 json_config -- json_config/json_config.sh@343 -- # count=100 00:04:32.641 21:03:44 json_config -- json_config/json_config.sh@344 -- # '[' 100 -gt 0 ']' 00:04:32.641 21:03:44 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:32.641 21:03:44 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:32.641 21:03:44 json_config -- json_config/json_config.sh@345 -- # /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method check_empty 00:04:33.207 21:03:44 json_config -- json_config/json_config.sh@345 -- # break 00:04:33.207 21:03:44 json_config -- json_config/json_config.sh@350 -- # '[' 100 -eq 0 ']' 00:04:33.207 21:03:44 json_config -- json_config/json_config.sh@369 -- # json_config_test_shutdown_app target 00:04:33.207 21:03:44 json_config -- json_config/common.sh@31 -- # local app=target 00:04:33.207 21:03:44 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:33.207 21:03:44 json_config -- json_config/common.sh@35 -- # [[ -n 45884 ]] 00:04:33.207 21:03:44 json_config -- json_config/common.sh@38 -- # kill -SIGINT 45884 00:04:33.207 21:03:44 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:33.207 21:03:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.207 21:03:44 json_config -- json_config/common.sh@41 -- # kill -0 45884 00:04:33.207 21:03:44 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:33.465 21:03:44 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:33.465 21:03:44 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:33.465 21:03:44 json_config -- json_config/common.sh@41 -- # kill -0 45884 00:04:33.465 21:03:44 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:33.465 21:03:44 json_config -- json_config/common.sh@43 -- # break 00:04:33.465 21:03:44 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:33.465 SPDK target shutdown done 00:04:33.465 21:03:44 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:33.465 INFO: relaunching applications... 00:04:33.465 21:03:44 json_config -- json_config/json_config.sh@371 -- # echo 'INFO: relaunching applications...' 
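The shutdown just traced is a plain signal-then-poll pattern: send SIGINT, then probe the PID with kill -0 every half second, up to 30 tries. Distilled into a standalone sketch (the signal, the kill -0 probe, the retry count, and the 0.5 s sleep are all taken from the trace; everything else is illustrative):

    pid=45884                # target PID, as recorded when the app was launched
    kill -SIGINT "$pid"      # ask the target to exit cleanly
    for (( i = 0; i < 30; i++ )); do
        if ! kill -0 "$pid" 2>/dev/null; then   # kill -0 only tests existence
            echo 'SPDK target shutdown done'
            break
        fi
        sleep 0.5                               # re-check, bounded at ~15 s
    done

The bounded loop is what keeps a hung target from wedging the whole run; a caller could escalate to a harder signal after the loop if the process were still alive.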
00:04:33.465 21:03:44 json_config -- json_config/json_config.sh@372 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.465 21:03:44 json_config -- json_config/common.sh@9 -- # local app=target 00:04:33.465 21:03:44 json_config -- json_config/common.sh@10 -- # shift 00:04:33.465 21:03:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:33.465 21:03:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:33.465 21:03:44 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:33.465 21:03:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.465 21:03:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:33.465 21:03:45 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=46070 00:04:33.465 21:03:45 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:33.465 Waiting for target to run... 00:04:33.465 21:03:45 json_config -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:33.465 21:03:45 json_config -- json_config/common.sh@25 -- # waitforlisten 46070 /var/tmp/spdk_tgt.sock 00:04:33.465 21:03:45 json_config -- common/autotest_common.sh@829 -- # '[' -z 46070 ']' 00:04:33.465 21:03:45 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:33.465 21:03:45 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:33.465 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:33.465 21:03:45 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:33.465 21:03:45 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:33.465 21:03:45 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:33.465 [2024-07-14 21:03:45.008546] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:33.465 [2024-07-14 21:03:45.008822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:34.032 EAL: TSC is not safe to use in SMP mode 00:04:34.032 EAL: TSC is not invariant 00:04:34.032 [2024-07-14 21:03:45.293535] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.032 [2024-07-14 21:03:45.382387] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
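Relaunching with --json replays the configuration saved a moment ago, and the "Waiting for process to start up and listen on UNIX domain socket" phase is essentially an RPC retry loop against that socket. A rough sketch of the idea (run from the spdk repo root; max_retries=100 comes from the trace, while using spdk_get_version as the probe is an assumption — it is a real RPC, visible in the rpc_get_methods listing later in this log, but the helper's actual probe may differ):

    # Launch the target from the saved config and record its PID.
    build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock \
        --json spdk_tgt_config.json &
    pid=$!
    # Poll the RPC socket until it answers or the retries run out.
    for (( i = 0; i < 100; i++ )); do
        if scripts/rpc.py -s /var/tmp/spdk_tgt.sock spdk_get_version \
                >/dev/null 2>&1; then
            break               # socket is up; the target is ready for RPCs
        fi
        sleep 0.5               # illustrative interval
    done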
00:04:34.032 [2024-07-14 21:03:45.384940] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.032 [2024-07-14 21:03:45.526575] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:04:34.032 [2024-07-14 21:03:45.526625] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Nvme0n1 00:04:34.032 [2024-07-14 21:03:45.534564] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:34.032 [2024-07-14 21:03:45.534597] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc0 00:04:34.032 [2024-07-14 21:03:45.542576] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:34.032 [2024-07-14 21:03:45.542611] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:04:34.032 [2024-07-14 21:03:45.542634] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:04:34.032 [2024-07-14 21:03:45.550577] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:04:34.291 [2024-07-14 21:03:45.623587] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:04:34.291 [2024-07-14 21:03:45.623636] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:34.291 [2024-07-14 21:03:45.623662] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x5bf78837780 00:04:34.291 [2024-07-14 21:03:45.623684] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:34.291 [2024-07-14 21:03:45.623788] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:34.291 [2024-07-14 21:03:45.623798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: PTBdevFromMalloc3 00:04:34.550 21:03:46 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:34.550 00:04:34.550 21:03:46 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:34.550 21:03:46 json_config -- json_config/common.sh@26 -- # echo '' 00:04:34.550 21:03:46 json_config -- json_config/json_config.sh@373 -- # [[ 0 -eq 1 ]] 00:04:34.550 INFO: Checking if target configuration is the same... 00:04:34.550 21:03:46 json_config -- json_config/json_config.sh@377 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:34.550 21:03:46 json_config -- json_config/json_config.sh@378 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.gHUFZJ /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.550 + '[' 2 -ne 2 ']' 00:04:34.550 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:34.550 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:34.550 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:34.550 +++ basename /tmp//sh-np.gHUFZJ 00:04:34.550 ++ mktemp /tmp/sh-np.gHUFZJ.XXX 00:04:34.550 + tmp_file_1=/tmp/sh-np.gHUFZJ.o8B 00:04:34.550 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:34.550 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:34.550 + tmp_file_2=/tmp/spdk_tgt_config.json.mPm 00:04:34.550 + ret=0 00:04:34.550 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:34.550 21:03:46 json_config -- json_config/json_config.sh@378 -- # tgt_rpc save_config 00:04:34.550 21:03:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.117 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:35.117 + diff -u /tmp/sh-np.gHUFZJ.o8B /tmp/spdk_tgt_config.json.mPm 00:04:35.117 + echo 'INFO: JSON config files are the same' 00:04:35.117 INFO: JSON config files are the same 00:04:35.117 + rm /tmp/sh-np.gHUFZJ.o8B /tmp/spdk_tgt_config.json.mPm 00:04:35.117 + exit 0 00:04:35.117 21:03:46 json_config -- json_config/json_config.sh@379 -- # [[ 0 -eq 1 ]] 00:04:35.117 INFO: changing configuration and checking if this can be detected... 00:04:35.117 21:03:46 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:35.117 21:03:46 json_config -- json_config/json_config.sh@386 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.117 21:03:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:35.427 21:03:46 json_config -- json_config/json_config.sh@387 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh /tmp//sh-np.8AlmFo /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.427 + '[' 2 -ne 2 ']' 00:04:35.427 +++ dirname /home/vagrant/spdk_repo/spdk/test/json_config/json_diff.sh 00:04:35.427 ++ readlink -f /home/vagrant/spdk_repo/spdk/test/json_config/../.. 
00:04:35.427 + rootdir=/home/vagrant/spdk_repo/spdk 00:04:35.427 +++ basename /tmp//sh-np.8AlmFo 00:04:35.427 ++ mktemp /tmp/sh-np.8AlmFo.XXX 00:04:35.427 + tmp_file_1=/tmp/sh-np.8AlmFo.BDm 00:04:35.427 +++ basename /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:35.427 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:35.427 + tmp_file_2=/tmp/spdk_tgt_config.json.A9w 00:04:35.427 + ret=0 00:04:35.427 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:35.427 21:03:46 json_config -- json_config/json_config.sh@387 -- # tgt_rpc save_config 00:04:35.427 21:03:46 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:35.686 + /home/vagrant/spdk_repo/spdk/test/json_config/config_filter.py -method sort 00:04:35.944 + diff -u /tmp/sh-np.8AlmFo.BDm /tmp/spdk_tgt_config.json.A9w 00:04:35.944 + ret=1 00:04:35.944 + echo '=== Start of file: /tmp/sh-np.8AlmFo.BDm ===' 00:04:35.944 + cat /tmp/sh-np.8AlmFo.BDm 00:04:35.944 + echo '=== End of file: /tmp/sh-np.8AlmFo.BDm ===' 00:04:35.944 + echo '' 00:04:35.944 + echo '=== Start of file: /tmp/spdk_tgt_config.json.A9w ===' 00:04:35.944 + cat /tmp/spdk_tgt_config.json.A9w 00:04:35.945 + echo '=== End of file: /tmp/spdk_tgt_config.json.A9w ===' 00:04:35.945 + echo '' 00:04:35.945 + rm /tmp/sh-np.8AlmFo.BDm /tmp/spdk_tgt_config.json.A9w 00:04:35.945 + exit 1 00:04:35.945 INFO: configuration change detected. 00:04:35.945 21:03:47 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: configuration change detected.' 00:04:35.945 21:03:47 json_config -- json_config/json_config.sh@394 -- # json_config_test_fini 00:04:35.945 21:03:47 json_config -- json_config/json_config.sh@306 -- # timing_enter json_config_test_fini 00:04:35.945 21:03:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.945 21:03:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.945 21:03:47 json_config -- json_config/json_config.sh@307 -- # local ret=0 00:04:35.945 21:03:47 json_config -- json_config/json_config.sh@309 -- # [[ -n '' ]] 00:04:35.945 21:03:47 json_config -- json_config/json_config.sh@317 -- # [[ -n 46070 ]] 00:04:35.945 21:03:47 json_config -- json_config/json_config.sh@320 -- # cleanup_bdev_subsystem_config 00:04:35.945 21:03:47 json_config -- json_config/json_config.sh@184 -- # timing_enter cleanup_bdev_subsystem_config 00:04:35.945 21:03:47 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:35.945 21:03:47 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.945 21:03:47 json_config -- json_config/json_config.sh@186 -- # [[ 1 -eq 1 ]] 00:04:35.945 21:03:47 json_config -- json_config/json_config.sh@187 -- # tgt_rpc bdev_lvol_delete lvs_test/clone0 00:04:35.945 21:03:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/clone0 00:04:36.203 21:03:47 json_config -- json_config/json_config.sh@188 -- # tgt_rpc bdev_lvol_delete lvs_test/lvol0 00:04:36.203 21:03:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/lvol0 00:04:36.462 21:03:47 json_config -- json_config/json_config.sh@189 -- # tgt_rpc bdev_lvol_delete lvs_test/snapshot0 00:04:36.462 21:03:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete lvs_test/snapshot0 
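Both comparison passes above boil down to the same three steps: dump the live configuration over RPC, normalize both JSON documents so key order cannot cause false diffs, and compare. A condensed sketch (file names and the stdin/stdout plumbing are illustrative; save_config, config_filter.py -method sort, and the same/changed exit conventions are taken from the trace):

    # Dump the running target's configuration to a temp file.
    scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config > live.json
    # Sort both documents so only real content differences survive the diff.
    test/json_config/config_filter.py -method sort < live.json     > live.sorted
    test/json_config/config_filter.py -method sort < baseline.json > base.sorted
    if diff -u base.sorted live.sorted; then
        echo 'INFO: JSON config files are the same'       # first pass: exit 0
    else
        echo 'INFO: configuration change detected.'       # second pass: exit 1
    fi

Deleting MallocBdevForConfigChangeCheck between the two passes is what guarantees the second diff is non-empty.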
00:04:36.462 21:03:47 json_config -- json_config/json_config.sh@190 -- # tgt_rpc bdev_lvol_delete_lvstore -l lvs_test 00:04:36.462 21:03:47 json_config -- json_config/common.sh@57 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_lvol_delete_lvstore -l lvs_test 00:04:36.721 21:03:48 json_config -- json_config/json_config.sh@193 -- # uname -s 00:04:36.721 21:03:48 json_config -- json_config/json_config.sh@193 -- # [[ FreeBSD = Linux ]] 00:04:36.721 21:03:48 json_config -- json_config/json_config.sh@197 -- # [[ 0 -eq 1 ]] 00:04:36.721 21:03:48 json_config -- json_config/json_config.sh@201 -- # timing_exit cleanup_bdev_subsystem_config 00:04:36.721 21:03:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.721 21:03:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.721 21:03:48 json_config -- json_config/json_config.sh@323 -- # killprocess 46070 00:04:36.721 21:03:48 json_config -- common/autotest_common.sh@948 -- # '[' -z 46070 ']' 00:04:36.721 21:03:48 json_config -- common/autotest_common.sh@952 -- # kill -0 46070 00:04:36.721 21:03:48 json_config -- common/autotest_common.sh@953 -- # uname 00:04:36.721 21:03:48 json_config -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:36.721 21:03:48 json_config -- common/autotest_common.sh@956 -- # ps -c -o command 46070 00:04:36.721 21:03:48 json_config -- common/autotest_common.sh@956 -- # tail -1 00:04:36.721 21:03:48 json_config -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:36.721 21:03:48 json_config -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:36.721 21:03:48 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46070' 00:04:36.721 killing process with pid 46070 00:04:36.721 21:03:48 json_config -- common/autotest_common.sh@967 -- # kill 46070 00:04:36.721 21:03:48 json_config -- common/autotest_common.sh@972 -- # wait 46070 00:04:36.979 21:03:48 json_config -- json_config/json_config.sh@326 -- # rm -f /home/vagrant/spdk_repo/spdk/spdk_initiator_config.json /home/vagrant/spdk_repo/spdk/spdk_tgt_config.json 00:04:36.979 21:03:48 json_config -- json_config/json_config.sh@327 -- # timing_exit json_config_test_fini 00:04:36.980 21:03:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:36.980 21:03:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.980 21:03:48 json_config -- json_config/json_config.sh@328 -- # return 0 00:04:36.980 INFO: Success 00:04:36.980 21:03:48 json_config -- json_config/json_config.sh@396 -- # echo 'INFO: Success' 00:04:36.980 00:04:36.980 real 0m11.624s 00:04:36.980 user 0m18.462s 00:04:36.980 sys 0m1.987s 00:04:36.980 21:03:48 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:36.980 ************************************ 00:04:36.980 END TEST json_config 00:04:36.980 ************************************ 00:04:36.980 21:03:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.238 21:03:48 -- common/autotest_common.sh@1142 -- # return 0 00:04:37.238 21:03:48 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:37.238 21:03:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:37.238 21:03:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:37.238 21:03:48 -- common/autotest_common.sh@10 -- # set +x 00:04:37.238 ************************************ 00:04:37.238 START TEST json_config_extra_key 
00:04:37.238 ************************************ 00:04:37.238 21:03:48 json_config_extra_key -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:37.238 21:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:37.238 21:03:48 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:37.238 21:03:48 json_config_extra_key -- nvmf/common.sh@7 -- # [[ FreeBSD == FreeBSD ]] 00:04:37.238 21:03:48 json_config_extra_key -- nvmf/common.sh@7 -- # return 0 00:04:37.238 21:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:37.238 21:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:37.238 21:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:37.238 21:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:37.238 21:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:37.238 21:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:37.238 21:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:37.238 21:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:37.238 21:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:37.238 21:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:37.238 INFO: launching applications... 00:04:37.238 21:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:37.238 21:03:48 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:37.238 21:03:48 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:37.238 21:03:48 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:37.238 21:03:48 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:37.238 21:03:48 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:37.238 21:03:48 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:37.238 21:03:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.238 21:03:48 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:37.238 21:03:48 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=46203 00:04:37.238 Waiting for target to run... 00:04:37.238 21:03:48 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
00:04:37.238 21:03:48 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 46203 /var/tmp/spdk_tgt.sock 00:04:37.238 21:03:48 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:37.238 21:03:48 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 46203 ']' 00:04:37.238 21:03:48 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:37.238 21:03:48 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:37.238 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:37.238 21:03:48 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:37.238 21:03:48 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:37.238 21:03:48 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:37.238 [2024-07-14 21:03:48.694842] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:37.238 [2024-07-14 21:03:48.695017] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:37.498 EAL: TSC is not safe to use in SMP mode 00:04:37.498 EAL: TSC is not invariant 00:04:37.498 [2024-07-14 21:03:49.001543] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.757 [2024-07-14 21:03:49.076572] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:37.757 [2024-07-14 21:03:49.078844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.324 21:03:49 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:38.324 21:03:49 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:04:38.324 00:04:38.324 21:03:49 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:38.324 INFO: shutting down applications... 00:04:38.324 21:03:49 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
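The common.sh bookkeeping sourced at the top of this test keeps one associative array per property — PID, RPC socket, launch parameters, config path — all keyed by app name, which is why the start/stop helpers can be written generically against app_pid["$app"]. A minimal sketch of that pattern (array names and values mirror the trace; the launch line is condensed and assumes the spdk repo root):

    declare -A app_pid app_socket app_params configs_path
    app_socket['target']='/var/tmp/spdk_tgt.sock'
    app_params['target']='-m 0x1 -s 1024'
    configs_path['target']='test/json_config/extra_key.json'
    # Launch, then record the PID so generic helpers can manage the app:
    build/bin/spdk_tgt ${app_params['target']} -r "${app_socket['target']}" \
        --json "${configs_path['target']}" &
    app_pid['target']=$!
    # Any helper can now signal or poll purely by name:
    kill -SIGINT "${app_pid['target']}"

Keying everything by app name is what lets the same helpers drive both a target and an initiator in the json_config variants that launch two apps.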
00:04:38.324 21:03:49 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:38.324 21:03:49 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:38.324 21:03:49 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:38.324 21:03:49 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 46203 ]] 00:04:38.324 21:03:49 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 46203 00:04:38.324 21:03:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:38.324 21:03:49 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.324 21:03:49 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46203 00:04:38.324 21:03:49 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.890 21:03:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.890 21:03:50 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.890 21:03:50 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 46203 00:04:38.890 21:03:50 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:38.890 21:03:50 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:38.890 21:03:50 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:38.890 SPDK target shutdown done 00:04:38.890 21:03:50 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:38.890 Success 00:04:38.890 21:03:50 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:38.890 00:04:38.890 real 0m1.699s 00:04:38.890 user 0m1.415s 00:04:38.890 sys 0m0.521s 00:04:38.890 21:03:50 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:38.890 21:03:50 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:38.890 ************************************ 00:04:38.890 END TEST json_config_extra_key 00:04:38.890 ************************************ 00:04:38.890 21:03:50 -- common/autotest_common.sh@1142 -- # return 0 00:04:38.890 21:03:50 -- spdk/autotest.sh@174 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:38.890 21:03:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:38.890 21:03:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:38.890 21:03:50 -- common/autotest_common.sh@10 -- # set +x 00:04:38.890 ************************************ 00:04:38.890 START TEST alias_rpc 00:04:38.890 ************************************ 00:04:38.890 21:03:50 alias_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:38.890 * Looking for test storage... 
00:04:38.890 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:38.890 21:03:50 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:38.890 21:03:50 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:38.890 21:03:50 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=46261 00:04:38.890 21:03:50 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 46261 00:04:38.890 21:03:50 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 46261 ']' 00:04:38.891 21:03:50 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:38.891 21:03:50 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:38.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:38.891 21:03:50 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:38.891 21:03:50 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:38.891 21:03:50 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:38.891 [2024-07-14 21:03:50.424062] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:38.891 [2024-07-14 21:03:50.424267] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:39.457 EAL: TSC is not safe to use in SMP mode 00:04:39.457 EAL: TSC is not invariant 00:04:39.457 [2024-07-14 21:03:50.984828] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.715 [2024-07-14 21:03:51.074228] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:39.715 [2024-07-14 21:03:51.076593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:39.973 21:03:51 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:39.973 21:03:51 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:04:39.973 21:03:51 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:40.232 21:03:51 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 46261 00:04:40.232 21:03:51 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 46261 ']' 00:04:40.232 21:03:51 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 46261 00:04:40.232 21:03:51 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:04:40.232 21:03:51 alias_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:40.232 21:03:51 alias_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 46261 00:04:40.232 21:03:51 alias_rpc -- common/autotest_common.sh@956 -- # tail -1 00:04:40.232 21:03:51 alias_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:40.232 killing process with pid 46261 00:04:40.232 21:03:51 alias_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:40.232 21:03:51 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46261' 00:04:40.232 21:03:51 alias_rpc -- common/autotest_common.sh@967 -- # kill 46261 00:04:40.232 21:03:51 alias_rpc -- common/autotest_common.sh@972 -- # wait 46261 00:04:40.490 00:04:40.490 real 0m1.712s 00:04:40.490 user 0m1.748s 00:04:40.490 sys 0m0.791s 00:04:40.490 21:03:51 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.490 21:03:51 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.490 ************************************ 00:04:40.490 END TEST alias_rpc 00:04:40.490 ************************************ 00:04:40.490 21:03:52 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.490 21:03:52 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:40.490 21:03:52 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:40.490 21:03:52 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.490 21:03:52 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.490 21:03:52 -- common/autotest_common.sh@10 -- # set +x 00:04:40.748 ************************************ 00:04:40.748 START TEST spdkcli_tcp 00:04:40.748 ************************************ 00:04:40.748 21:03:52 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:40.748 * Looking for test storage... 
00:04:40.748 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:40.748 21:03:52 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:40.748 21:03:52 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:40.748 21:03:52 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:40.748 21:03:52 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:40.748 21:03:52 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:40.748 21:03:52 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:40.748 21:03:52 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:40.748 21:03:52 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:40.748 21:03:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.748 21:03:52 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=46322 00:04:40.748 21:03:52 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 46322 00:04:40.748 21:03:52 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 46322 ']' 00:04:40.748 21:03:52 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.748 21:03:52 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.748 21:03:52 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:40.748 21:03:52 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:40.748 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.748 21:03:52 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.748 21:03:52 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:40.748 [2024-07-14 21:03:52.210275] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:40.748 [2024-07-14 21:03:52.210408] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:41.315 EAL: TSC is not safe to use in SMP mode 00:04:41.315 EAL: TSC is not invariant 00:04:41.315 [2024-07-14 21:03:52.709257] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:41.315 [2024-07-14 21:03:52.780250] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:41.315 [2024-07-14 21:03:52.780293] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:04:41.315 [2024-07-14 21:03:52.783219] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.315 [2024-07-14 21:03:52.783214] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:41.879 21:03:53 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.879 21:03:53 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:04:41.879 21:03:53 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=46330 00:04:41.879 21:03:53 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:41.879 21:03:53 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:42.137 [ 00:04:42.137 "spdk_get_version", 00:04:42.137 "rpc_get_methods", 00:04:42.137 "env_dpdk_get_mem_stats", 00:04:42.137 "trace_get_info", 00:04:42.137 "trace_get_tpoint_group_mask", 00:04:42.137 "trace_disable_tpoint_group", 00:04:42.137 "trace_enable_tpoint_group", 00:04:42.137 "trace_clear_tpoint_mask", 00:04:42.137 "trace_set_tpoint_mask", 00:04:42.137 "notify_get_notifications", 00:04:42.137 "notify_get_types", 00:04:42.137 "accel_get_stats", 00:04:42.137 "accel_set_options", 00:04:42.137 "accel_set_driver", 00:04:42.137 "accel_crypto_key_destroy", 00:04:42.137 "accel_crypto_keys_get", 00:04:42.137 "accel_crypto_key_create", 00:04:42.137 "accel_assign_opc", 00:04:42.137 "accel_get_module_info", 00:04:42.137 "accel_get_opc_assignments", 00:04:42.137 "bdev_get_histogram", 00:04:42.137 "bdev_enable_histogram", 00:04:42.137 "bdev_set_qos_limit", 00:04:42.137 "bdev_set_qd_sampling_period", 00:04:42.137 "bdev_get_bdevs", 00:04:42.137 "bdev_reset_iostat", 00:04:42.137 "bdev_get_iostat", 00:04:42.137 "bdev_examine", 00:04:42.137 "bdev_wait_for_examine", 00:04:42.137 "bdev_set_options", 00:04:42.137 "keyring_get_keys", 00:04:42.137 "framework_get_pci_devices", 00:04:42.137 "framework_get_config", 00:04:42.137 "framework_get_subsystems", 00:04:42.137 "sock_get_default_impl", 00:04:42.137 "sock_set_default_impl", 00:04:42.137 "sock_impl_set_options", 00:04:42.137 "sock_impl_get_options", 00:04:42.137 "thread_set_cpumask", 00:04:42.137 "framework_get_governor", 00:04:42.137 "framework_get_scheduler", 00:04:42.137 "framework_set_scheduler", 00:04:42.137 "framework_get_reactors", 00:04:42.137 "thread_get_io_channels", 00:04:42.137 "thread_get_pollers", 00:04:42.137 "thread_get_stats", 00:04:42.137 "framework_monitor_context_switch", 00:04:42.137 "spdk_kill_instance", 00:04:42.137 "log_enable_timestamps", 00:04:42.137 "log_get_flags", 00:04:42.137 "log_clear_flag", 00:04:42.137 "log_set_flag", 00:04:42.137 "log_get_level", 00:04:42.137 "log_set_level", 00:04:42.137 "log_get_print_level", 00:04:42.137 "log_set_print_level", 00:04:42.137 "framework_enable_cpumask_locks", 00:04:42.137 "framework_disable_cpumask_locks", 00:04:42.137 "framework_wait_init", 00:04:42.137 "framework_start_init", 00:04:42.137 "iobuf_get_stats", 00:04:42.137 "iobuf_set_options", 00:04:42.137 "vmd_rescan", 00:04:42.137 "vmd_remove_device", 00:04:42.137 "vmd_enable", 00:04:42.137 "nvmf_stop_mdns_prr", 00:04:42.137 "nvmf_publish_mdns_prr", 00:04:42.137 "nvmf_subsystem_get_listeners", 00:04:42.137 "nvmf_subsystem_get_qpairs", 00:04:42.137 "nvmf_subsystem_get_controllers", 00:04:42.137 "nvmf_get_stats", 00:04:42.137 "nvmf_get_transports", 00:04:42.137 "nvmf_create_transport", 00:04:42.137 "nvmf_get_targets", 00:04:42.137 "nvmf_delete_target", 00:04:42.137 "nvmf_create_target", 00:04:42.137 
"nvmf_subsystem_allow_any_host", 00:04:42.137 "nvmf_subsystem_remove_host", 00:04:42.137 "nvmf_subsystem_add_host", 00:04:42.137 "nvmf_ns_remove_host", 00:04:42.137 "nvmf_ns_add_host", 00:04:42.137 "nvmf_subsystem_remove_ns", 00:04:42.137 "nvmf_subsystem_add_ns", 00:04:42.137 "nvmf_subsystem_listener_set_ana_state", 00:04:42.137 "nvmf_discovery_get_referrals", 00:04:42.137 "nvmf_discovery_remove_referral", 00:04:42.137 "nvmf_discovery_add_referral", 00:04:42.137 "nvmf_subsystem_remove_listener", 00:04:42.137 "nvmf_subsystem_add_listener", 00:04:42.137 "nvmf_delete_subsystem", 00:04:42.137 "nvmf_create_subsystem", 00:04:42.137 "nvmf_get_subsystems", 00:04:42.137 "nvmf_set_crdt", 00:04:42.137 "nvmf_set_config", 00:04:42.137 "nvmf_set_max_subsystems", 00:04:42.137 "scsi_get_devices", 00:04:42.137 "iscsi_get_histogram", 00:04:42.137 "iscsi_enable_histogram", 00:04:42.137 "iscsi_set_options", 00:04:42.137 "iscsi_get_auth_groups", 00:04:42.137 "iscsi_auth_group_remove_secret", 00:04:42.137 "iscsi_auth_group_add_secret", 00:04:42.137 "iscsi_delete_auth_group", 00:04:42.137 "iscsi_create_auth_group", 00:04:42.137 "iscsi_set_discovery_auth", 00:04:42.137 "iscsi_get_options", 00:04:42.137 "iscsi_target_node_request_logout", 00:04:42.137 "iscsi_target_node_set_redirect", 00:04:42.137 "iscsi_target_node_set_auth", 00:04:42.137 "iscsi_target_node_add_lun", 00:04:42.137 "iscsi_get_stats", 00:04:42.137 "iscsi_get_connections", 00:04:42.137 "iscsi_portal_group_set_auth", 00:04:42.137 "iscsi_start_portal_group", 00:04:42.137 "iscsi_delete_portal_group", 00:04:42.137 "iscsi_create_portal_group", 00:04:42.137 "iscsi_get_portal_groups", 00:04:42.137 "iscsi_delete_target_node", 00:04:42.137 "iscsi_target_node_remove_pg_ig_maps", 00:04:42.137 "iscsi_target_node_add_pg_ig_maps", 00:04:42.137 "iscsi_create_target_node", 00:04:42.137 "iscsi_get_target_nodes", 00:04:42.137 "iscsi_delete_initiator_group", 00:04:42.137 "iscsi_initiator_group_remove_initiators", 00:04:42.137 "iscsi_initiator_group_add_initiators", 00:04:42.137 "iscsi_create_initiator_group", 00:04:42.137 "iscsi_get_initiator_groups", 00:04:42.137 "keyring_file_remove_key", 00:04:42.137 "keyring_file_add_key", 00:04:42.137 "iaa_scan_accel_module", 00:04:42.137 "dsa_scan_accel_module", 00:04:42.137 "ioat_scan_accel_module", 00:04:42.137 "accel_error_inject_error", 00:04:42.137 "bdev_aio_delete", 00:04:42.137 "bdev_aio_rescan", 00:04:42.137 "bdev_aio_create", 00:04:42.137 "blobfs_create", 00:04:42.137 "blobfs_detect", 00:04:42.137 "blobfs_set_cache_size", 00:04:42.137 "bdev_zone_block_delete", 00:04:42.137 "bdev_zone_block_create", 00:04:42.137 "bdev_delay_delete", 00:04:42.137 "bdev_delay_create", 00:04:42.137 "bdev_delay_update_latency", 00:04:42.137 "bdev_split_delete", 00:04:42.137 "bdev_split_create", 00:04:42.137 "bdev_error_inject_error", 00:04:42.137 "bdev_error_delete", 00:04:42.137 "bdev_error_create", 00:04:42.137 "bdev_raid_set_options", 00:04:42.137 "bdev_raid_remove_base_bdev", 00:04:42.137 "bdev_raid_add_base_bdev", 00:04:42.137 "bdev_raid_delete", 00:04:42.137 "bdev_raid_create", 00:04:42.137 "bdev_raid_get_bdevs", 00:04:42.137 "bdev_lvol_set_parent_bdev", 00:04:42.137 "bdev_lvol_set_parent", 00:04:42.137 "bdev_lvol_check_shallow_copy", 00:04:42.137 "bdev_lvol_start_shallow_copy", 00:04:42.137 "bdev_lvol_grow_lvstore", 00:04:42.137 "bdev_lvol_get_lvols", 00:04:42.137 "bdev_lvol_get_lvstores", 00:04:42.137 "bdev_lvol_delete", 00:04:42.137 "bdev_lvol_set_read_only", 00:04:42.137 "bdev_lvol_resize", 00:04:42.137 "bdev_lvol_decouple_parent", 
00:04:42.137 "bdev_lvol_inflate", 00:04:42.137 "bdev_lvol_rename", 00:04:42.137 "bdev_lvol_clone_bdev", 00:04:42.137 "bdev_lvol_clone", 00:04:42.137 "bdev_lvol_snapshot", 00:04:42.138 "bdev_lvol_create", 00:04:42.138 "bdev_lvol_delete_lvstore", 00:04:42.138 "bdev_lvol_rename_lvstore", 00:04:42.138 "bdev_lvol_create_lvstore", 00:04:42.138 "bdev_passthru_delete", 00:04:42.138 "bdev_passthru_create", 00:04:42.138 "bdev_nvme_send_cmd", 00:04:42.138 "bdev_nvme_get_path_iostat", 00:04:42.138 "bdev_nvme_get_mdns_discovery_info", 00:04:42.138 "bdev_nvme_stop_mdns_discovery", 00:04:42.138 "bdev_nvme_start_mdns_discovery", 00:04:42.138 "bdev_nvme_set_multipath_policy", 00:04:42.138 "bdev_nvme_set_preferred_path", 00:04:42.138 "bdev_nvme_get_io_paths", 00:04:42.138 "bdev_nvme_remove_error_injection", 00:04:42.138 "bdev_nvme_add_error_injection", 00:04:42.138 "bdev_nvme_get_discovery_info", 00:04:42.138 "bdev_nvme_stop_discovery", 00:04:42.138 "bdev_nvme_start_discovery", 00:04:42.138 "bdev_nvme_get_controller_health_info", 00:04:42.138 "bdev_nvme_disable_controller", 00:04:42.138 "bdev_nvme_enable_controller", 00:04:42.138 "bdev_nvme_reset_controller", 00:04:42.138 "bdev_nvme_get_transport_statistics", 00:04:42.138 "bdev_nvme_apply_firmware", 00:04:42.138 "bdev_nvme_detach_controller", 00:04:42.138 "bdev_nvme_get_controllers", 00:04:42.138 "bdev_nvme_attach_controller", 00:04:42.138 "bdev_nvme_set_hotplug", 00:04:42.138 "bdev_nvme_set_options", 00:04:42.138 "bdev_null_resize", 00:04:42.138 "bdev_null_delete", 00:04:42.138 "bdev_null_create", 00:04:42.138 "bdev_malloc_delete", 00:04:42.138 "bdev_malloc_create" 00:04:42.138 ] 00:04:42.138 21:03:53 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:42.138 21:03:53 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:42.138 21:03:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.138 21:03:53 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:42.138 21:03:53 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 46322 00:04:42.138 21:03:53 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 46322 ']' 00:04:42.138 21:03:53 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 46322 00:04:42.138 21:03:53 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:04:42.138 21:03:53 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:42.138 21:03:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps -c -o command 46322 00:04:42.138 21:03:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # tail -1 00:04:42.138 21:03:53 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:42.138 killing process with pid 46322 00:04:42.138 21:03:53 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:42.138 21:03:53 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46322' 00:04:42.138 21:03:53 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 46322 00:04:42.138 21:03:53 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 46322 00:04:42.395 00:04:42.395 real 0m1.707s 00:04:42.395 user 0m2.645s 00:04:42.395 sys 0m0.749s 00:04:42.395 ************************************ 00:04:42.395 END TEST spdkcli_tcp 00:04:42.395 ************************************ 00:04:42.395 21:03:53 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.395 21:03:53 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.395 21:03:53 -- common/autotest_common.sh@1142 -- # return 
0 00:04:42.395 21:03:53 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.395 21:03:53 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.395 21:03:53 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.395 21:03:53 -- common/autotest_common.sh@10 -- # set +x 00:04:42.395 ************************************ 00:04:42.395 START TEST dpdk_mem_utility 00:04:42.395 ************************************ 00:04:42.395 21:03:53 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:42.653 * Looking for test storage... 00:04:42.653 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:42.653 21:03:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:42.653 21:03:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=46401 00:04:42.653 21:03:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 46401 00:04:42.653 21:03:53 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:42.653 21:03:53 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 46401 ']' 00:04:42.653 21:03:53 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.653 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:42.653 21:03:53 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:42.653 21:03:53 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.653 21:03:53 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:42.653 21:03:53 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:42.653 [2024-07-14 21:03:53.958531] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:42.653 [2024-07-14 21:03:53.958799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:42.933 EAL: TSC is not safe to use in SMP mode 00:04:42.933 EAL: TSC is not invariant 00:04:42.933 [2024-07-14 21:03:54.447617] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:43.199 [2024-07-14 21:03:54.523854] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:04:43.199 [2024-07-14 21:03:54.526129] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.765 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:43.765 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:04:43.765 21:03:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:43.765 21:03:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:43.765 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:43.765 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:43.765 { 00:04:43.765 "filename": "/tmp/spdk_mem_dump.txt" 00:04:43.765 } 00:04:43.765 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:43.765 21:03:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:43.765 DPDK memory size 2048.000000 MiB in 1 heap(s) 00:04:43.765 1 heaps totaling size 2048.000000 MiB 00:04:43.765 size: 2048.000000 MiB heap id: 0 00:04:43.765 end heaps---------- 00:04:43.765 8 mempools totaling size 592.563660 MiB 00:04:43.765 size: 212.271240 MiB name: PDU_immediate_data_Pool 00:04:43.765 size: 153.489014 MiB name: PDU_data_out_Pool 00:04:43.765 size: 84.500549 MiB name: bdev_io_46401 00:04:43.765 size: 51.008362 MiB name: evtpool_46401 00:04:43.765 size: 50.000549 MiB name: msgpool_46401 00:04:43.765 size: 21.758911 MiB name: PDU_Pool 00:04:43.765 size: 19.508911 MiB name: SCSI_TASK_Pool 00:04:43.765 size: 0.026123 MiB name: Session_Pool 00:04:43.765 end mempools------- 00:04:43.765 6 memzones totaling size 4.142822 MiB 00:04:43.765 size: 1.000366 MiB name: RG_ring_0_46401 00:04:43.765 size: 1.000366 MiB name: RG_ring_1_46401 00:04:43.765 size: 1.000366 MiB name: RG_ring_4_46401 00:04:43.765 size: 1.000366 MiB name: RG_ring_5_46401 00:04:43.765 size: 0.125366 MiB name: RG_ring_2_46401 00:04:43.765 size: 0.015991 MiB name: RG_ring_3_46401 00:04:43.765 end memzones------- 00:04:43.765 21:03:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:43.765 heap id: 0 total size: 2048.000000 MiB number of busy elements: 39 number of free elements: 3 00:04:43.765 list of free elements. size: 1254.072021 MiB 00:04:43.765 element at address: 0x1060000000 with size: 1253.760681 MiB 00:04:43.765 element at address: 0x10e0000000 with size: 0.180054 MiB 00:04:43.765 element at address: 0x10e0400000 with size: 0.131287 MiB 00:04:43.765 list of standard malloc elements. 
size: 197.217834 MiB 00:04:43.765 element at address: 0x10c7bfff80 with size: 132.000122 MiB 00:04:43.765 element at address: 0x10e58b5f80 with size: 64.000122 MiB 00:04:43.765 element at address: 0x10e02fff80 with size: 1.000122 MiB 00:04:43.765 element at address: 0x10effd9f00 with size: 0.140747 MiB 00:04:43.765 element at address: 0x10e0421a80 with size: 0.062622 MiB 00:04:43.765 element at address: 0x10efffdf80 with size: 0.007935 MiB 00:04:43.765 element at address: 0x10e98b6480 with size: 0.000305 MiB 00:04:43.765 element at address: 0x10e002e180 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e002e240 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e002e300 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e002e3c0 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e002e480 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e0035080 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e0035280 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e0035340 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e003d600 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e003d6c0 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e003d780 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e04219c0 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e98b6000 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e98b60c0 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e98b6180 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e98b6240 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e98b6300 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e98b63c0 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e98b65c0 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e98b6680 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e98b6880 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e98b6940 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e98d6c00 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e98d6cc0 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e99d6f80 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e9ad7240 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10e9ad7300 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10eccd7640 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10eccd7840 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10eccd7900 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10efed7c40 with size: 0.000183 MiB 00:04:43.765 element at address: 0x10effd9e40 with size: 0.000183 MiB 00:04:43.765 list of memzone associated elements. 
size: 596.710144 MiB 00:04:43.765 element at address: 0x10b93ba640 with size: 211.013000 MiB 00:04:43.765 associated memzone info: size: 211.012878 MiB name: MP_PDU_immediate_data_Pool_0 00:04:43.765 element at address: 0x10afa453c0 with size: 152.449524 MiB 00:04:43.765 associated memzone info: size: 152.449402 MiB name: MP_PDU_data_out_Pool_0 00:04:43.765 element at address: 0x10e0431b00 with size: 84.000122 MiB 00:04:43.765 associated memzone info: size: 84.000000 MiB name: MP_bdev_io_46401_0 00:04:43.765 element at address: 0x10eccd79c0 with size: 48.000122 MiB 00:04:43.766 associated memzone info: size: 48.000000 MiB name: MP_evtpool_46401_0 00:04:43.766 element at address: 0x10e9ad73c0 with size: 48.000122 MiB 00:04:43.766 associated memzone info: size: 48.000000 MiB name: MP_msgpool_46401_0 00:04:43.766 element at address: 0x10c67bfcc0 with size: 20.250671 MiB 00:04:43.766 associated memzone info: size: 20.250549 MiB name: MP_PDU_Pool_0 00:04:43.766 element at address: 0x10ae6c2dc0 with size: 18.000671 MiB 00:04:43.766 associated memzone info: size: 18.000549 MiB name: MP_SCSI_TASK_Pool_0 00:04:43.766 element at address: 0x10efcd7a40 with size: 2.000488 MiB 00:04:43.766 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_46401 00:04:43.766 element at address: 0x10ecad7440 with size: 2.000488 MiB 00:04:43.766 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_46401 00:04:43.766 element at address: 0x10efed7d00 with size: 1.008118 MiB 00:04:43.766 associated memzone info: size: 1.007996 MiB name: MP_evtpool_46401 00:04:43.766 element at address: 0x10e00fdc40 with size: 1.008118 MiB 00:04:43.766 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:43.766 element at address: 0x10c66bdb80 with size: 1.008118 MiB 00:04:43.766 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:43.766 element at address: 0x10b92b8500 with size: 1.008118 MiB 00:04:43.766 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:43.766 element at address: 0x10af943280 with size: 1.008118 MiB 00:04:43.766 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:43.766 element at address: 0x10e99d7040 with size: 1.000488 MiB 00:04:43.766 associated memzone info: size: 1.000366 MiB name: RG_ring_0_46401 00:04:43.766 element at address: 0x10e98d6d80 with size: 1.000488 MiB 00:04:43.766 associated memzone info: size: 1.000366 MiB name: RG_ring_1_46401 00:04:43.766 element at address: 0x10e01ffd80 with size: 1.000488 MiB 00:04:43.766 associated memzone info: size: 1.000366 MiB name: RG_ring_4_46401 00:04:43.766 element at address: 0x10ae5c2bc0 with size: 1.000488 MiB 00:04:43.766 associated memzone info: size: 1.000366 MiB name: RG_ring_5_46401 00:04:43.766 element at address: 0x10e5831b80 with size: 0.500488 MiB 00:04:43.766 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_46401 00:04:43.766 element at address: 0x10e007da40 with size: 0.500488 MiB 00:04:43.766 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:43.766 element at address: 0x10af8c3080 with size: 0.500488 MiB 00:04:43.766 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:43.766 element at address: 0x10e003d840 with size: 0.250488 MiB 00:04:43.766 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:43.766 element at address: 0x10e98b6a00 with size: 0.125488 MiB 00:04:43.766 associated memzone info: size: 0.125366 MiB name: RG_ring_2_46401 00:04:43.766 
element at address: 0x10e0035400 with size: 0.031738 MiB 00:04:43.766 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:43.766 element at address: 0x10e002e540 with size: 0.023743 MiB 00:04:43.766 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:43.766 element at address: 0x10e58b1d80 with size: 0.016113 MiB 00:04:43.766 associated memzone info: size: 0.015991 MiB name: RG_ring_3_46401 00:04:43.766 element at address: 0x10e0034680 with size: 0.002441 MiB 00:04:43.766 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:43.766 element at address: 0x10eccd7700 with size: 0.000305 MiB 00:04:43.766 associated memzone info: size: 0.000183 MiB name: MP_msgpool_46401 00:04:43.766 element at address: 0x10e98b6740 with size: 0.000305 MiB 00:04:43.766 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_46401 00:04:43.766 element at address: 0x10e0035140 with size: 0.000305 MiB 00:04:43.766 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:43.766 21:03:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:43.766 21:03:55 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 46401 00:04:43.766 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 46401 ']' 00:04:43.766 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 46401 00:04:43.766 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:04:43.766 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:43.766 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps -c -o command 46401 00:04:43.766 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # tail -1 00:04:43.766 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:43.766 killing process with pid 46401 00:04:43.766 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:43.766 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46401' 00:04:43.766 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 46401 00:04:43.766 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 46401 00:04:44.024 00:04:44.024 real 0m1.664s 00:04:44.024 user 0m1.748s 00:04:44.024 sys 0m0.668s 00:04:44.024 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:44.024 ************************************ 00:04:44.024 END TEST dpdk_mem_utility 00:04:44.024 ************************************ 00:04:44.024 21:03:55 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.024 21:03:55 -- common/autotest_common.sh@1142 -- # return 0 00:04:44.024 21:03:55 -- spdk/autotest.sh@181 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:44.024 21:03:55 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:44.024 21:03:55 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.024 21:03:55 -- common/autotest_common.sh@10 -- # set +x 00:04:44.024 ************************************ 00:04:44.024 START TEST event 00:04:44.024 ************************************ 00:04:44.024 21:03:55 event -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:44.282 * Looking for test storage... 
00:04:44.282 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:44.282 21:03:55 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:44.282 21:03:55 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:44.282 21:03:55 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:44.282 21:03:55 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:04:44.282 21:03:55 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:44.282 21:03:55 event -- common/autotest_common.sh@10 -- # set +x 00:04:44.282 ************************************ 00:04:44.282 START TEST event_perf 00:04:44.282 ************************************ 00:04:44.282 21:03:55 event.event_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:44.282 Running I/O for 1 seconds...[2024-07-14 21:03:55.666336] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:44.282 [2024-07-14 21:03:55.666507] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:44.849 EAL: TSC is not safe to use in SMP mode 00:04:44.849 EAL: TSC is not invariant 00:04:44.849 [2024-07-14 21:03:56.217028] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:44.849 [2024-07-14 21:03:56.292616] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:44.849 [2024-07-14 21:03:56.292668] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:04:44.849 [2024-07-14 21:03:56.292676] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:04:44.849 [2024-07-14 21:03:56.292682] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:04:44.849 Running I/O for 1 seconds...[2024-07-14 21:03:56.296487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.849 [2024-07-14 21:03:56.296333] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:44.849 [2024-07-14 21:03:56.296411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:44.849 [2024-07-14 21:03:56.296480] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:46.221 00:04:46.221 lcore 0: 2512643 00:04:46.221 lcore 1: 2512641 00:04:46.221 lcore 2: 2512642 00:04:46.221 lcore 3: 2512643 00:04:46.221 done. 
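
The event_perf run above drives self-requeueing event chains on four reactors for one second; the lcore lines are the per-core round-trip totals. A hedged sketch of the API being exercised follows: spdk_event_allocate()/spdk_event_call() and the app bootstrap are the real spdk/event.h interface, while the handler and counter are simplifications of ours, and unlike event_perf (which arms a timer that calls spdk_app_stop() after -t seconds) this loop would run until the app is killed.

    #include "spdk/stdinc.h"
    #include "spdk/env.h"
    #include "spdk/event.h"

    static uint64_t g_count;

    static void
    event_fn(void *arg1, void *arg2)
    {
        g_count++;
        /* Re-queue ourselves on the current core, as event_perf's
         * benchmark loop does; a real tool would stop after a timer. */
        struct spdk_event *ev = spdk_event_allocate(spdk_env_get_current_core(),
                                                    event_fn, NULL, NULL);
        spdk_event_call(ev);
    }

    static void
    start_fn(void *ctx)
    {
        /* Kick off one event chain on the core we were started on. */
        struct spdk_event *ev = spdk_event_allocate(spdk_env_get_current_core(),
                                                    event_fn, NULL, NULL);
        spdk_event_call(ev);
    }

    int main(void)
    {
        struct spdk_app_opts opts = {};

        spdk_app_opts_init(&opts, sizeof(opts));
        opts.name = "event_demo";    /* hypothetical */
        opts.reactor_mask = "0xF";   /* four cores, like event_perf -m 0xF */

        int rc = spdk_app_start(&opts, start_fn, NULL);
        spdk_app_fini();
        return rc;
    }
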
00:04:46.221 00:04:46.221 real 0m1.750s 00:04:46.221 user 0m4.137s 00:04:46.221 sys 0m0.604s 00:04:46.221 21:03:57 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:46.221 ************************************ 00:04:46.221 END TEST event_perf 00:04:46.221 ************************************ 00:04:46.221 21:03:57 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:46.221 21:03:57 event -- common/autotest_common.sh@1142 -- # return 0 00:04:46.221 21:03:57 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:46.221 21:03:57 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:46.221 21:03:57 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:46.221 21:03:57 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.221 ************************************ 00:04:46.221 START TEST event_reactor 00:04:46.221 ************************************ 00:04:46.221 21:03:57 event.event_reactor -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:46.221 [2024-07-14 21:03:57.462578] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:46.221 [2024-07-14 21:03:57.462762] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:46.478 EAL: TSC is not safe to use in SMP mode 00:04:46.478 EAL: TSC is not invariant 00:04:46.478 [2024-07-14 21:03:58.014364] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.736 [2024-07-14 21:03:58.100072] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:46.736 [2024-07-14 21:03:58.102269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:47.669 test_start 00:04:47.669 oneshot 00:04:47.669 tick 100 00:04:47.669 tick 100 00:04:47.669 tick 250 00:04:47.669 tick 100 00:04:47.669 tick 100 00:04:47.669 tick 100 00:04:47.669 tick 250 00:04:47.669 tick 500 00:04:47.669 tick 100 00:04:47.669 tick 100 00:04:47.669 tick 250 00:04:47.669 tick 100 00:04:47.669 tick 100 00:04:47.669 test_end 00:04:47.669 00:04:47.669 real 0m1.761s 00:04:47.669 user 0m1.177s 00:04:47.669 sys 0m0.581s 00:04:47.927 21:03:59 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:47.927 21:03:59 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:47.927 ************************************ 00:04:47.927 END TEST event_reactor 00:04:47.927 ************************************ 00:04:47.927 21:03:59 event -- common/autotest_common.sh@1142 -- # return 0 00:04:47.927 21:03:59 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:47.927 21:03:59 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:04:47.927 21:03:59 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:47.927 21:03:59 event -- common/autotest_common.sh@10 -- # set +x 00:04:47.927 ************************************ 00:04:47.927 START TEST event_reactor_perf 00:04:47.927 ************************************ 00:04:47.927 21:03:59 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:47.927 [2024-07-14 21:03:59.267877] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:47.927 [2024-07-14 21:03:59.268078] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:48.495 EAL: TSC is not safe to use in SMP mode 00:04:48.495 EAL: TSC is not invariant 00:04:48.495 [2024-07-14 21:03:59.806110] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.495 [2024-07-14 21:03:59.906961] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:48.495 [2024-07-14 21:03:59.909445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.870 test_start 00:04:49.870 test_end 00:04:49.870 Performance: 3661486 events per second 00:04:49.870 00:04:49.870 real 0m1.758s 00:04:49.870 user 0m1.189s 00:04:49.870 sys 0m0.566s 00:04:49.870 21:04:01 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.870 21:04:01 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.870 ************************************ 00:04:49.870 END TEST event_reactor_perf 00:04:49.870 ************************************ 00:04:49.870 21:04:01 event -- common/autotest_common.sh@1142 -- # return 0 00:04:49.870 21:04:01 event -- event/event.sh@49 -- # uname -s 00:04:49.870 21:04:01 event -- event/event.sh@49 -- # '[' FreeBSD = Linux ']' 00:04:49.870 00:04:49.870 real 0m5.566s 00:04:49.870 user 0m6.637s 00:04:49.870 sys 0m1.947s 00:04:49.870 21:04:01 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:49.870 ************************************ 00:04:49.870 END TEST event 00:04:49.870 ************************************ 00:04:49.870 21:04:01 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.870 21:04:01 -- common/autotest_common.sh@1142 -- # return 0 00:04:49.870 21:04:01 -- spdk/autotest.sh@182 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:49.870 21:04:01 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:49.870 21:04:01 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.870 21:04:01 -- common/autotest_common.sh@10 -- # set +x 00:04:49.870 ************************************ 00:04:49.870 START TEST thread 00:04:49.870 ************************************ 00:04:49.870 21:04:01 thread -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:04:49.870 * Looking for test storage... 00:04:49.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:04:49.870 21:04:01 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.870 21:04:01 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:49.870 21:04:01 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:49.870 21:04:01 thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.870 ************************************ 00:04:49.870 START TEST thread_poller_perf 00:04:49.870 ************************************ 00:04:49.870 21:04:01 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:49.870 [2024-07-14 21:04:01.269711] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
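
For scale, the event_reactor_perf result a few entries above, 3661486 events per second on a single core, can be converted into a per-event cost using the TSC rate the poller tests below report (tsc_hz: 2199999327):

    2199999327 cyc/s / 3661486 events/s ~= 601 cyc per event round trip

This is back-of-envelope only, since reactor_perf itself prints no cycle counts, but it sits plausibly above the 325-cycle poller_cost measured next.
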
00:04:49.870 [2024-07-14 21:04:01.269872] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:50.437 EAL: TSC is not safe to use in SMP mode 00:04:50.437 EAL: TSC is not invariant 00:04:50.437 [2024-07-14 21:04:01.805740] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.437 [2024-07-14 21:04:01.895306] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:50.437 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:50.437 [2024-07-14 21:04:01.897739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.813 ====================================== 00:04:51.813 busy:2201882982 (cyc) 00:04:51.813 total_run_count: 6772000 00:04:51.813 tsc_hz: 2199999327 (cyc) 00:04:51.813 ====================================== 00:04:51.813 poller_cost: 325 (cyc), 147 (nsec) 00:04:51.813 00:04:51.813 real 0m1.745s 00:04:51.813 user 0m1.173s 00:04:51.813 sys 0m0.569s 00:04:51.813 21:04:03 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:51.813 21:04:03 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:51.813 ************************************ 00:04:51.813 END TEST thread_poller_perf 00:04:51.813 ************************************ 00:04:51.813 21:04:03 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:51.813 21:04:03 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:51.813 21:04:03 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:04:51.813 21:04:03 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:51.813 21:04:03 thread -- common/autotest_common.sh@10 -- # set +x 00:04:51.813 ************************************ 00:04:51.813 START TEST thread_poller_perf 00:04:51.813 ************************************ 00:04:51.813 21:04:03 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:51.813 [2024-07-14 21:04:03.060070] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:51.813 [2024-07-14 21:04:03.060268] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:52.070 EAL: TSC is not safe to use in SMP mode 00:04:52.070 EAL: TSC is not invariant 00:04:52.070 [2024-07-14 21:04:03.546210] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.328 [2024-07-14 21:04:03.625587] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:52.328 Running 1000 pollers for 1 seconds with 0 microseconds period. 
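
The two thread_poller_perf runs around this point time the reactor's poller loop itself: the "-l 1" run above registered 1000 timed pollers with a 1 microsecond period (poller_cost: 325 cyc), and the "-l 0" run just announced registers them with period 0, so they fire on every reactor iteration (poller_cost: 25 cyc, below). A minimal, hypothetical sketch of the spdk/thread.h API under test; spdk_poller_register(), spdk_poller_unregister() and the SPDK_POLLER_* return codes are the real interface, the context struct is ours, and this must run on an SPDK thread.

    #include "spdk/stdinc.h"
    #include "spdk/thread.h"

    struct demo_ctx {
        struct spdk_poller *poller;
        uint64_t run_count;
    };

    /* Poller callback: return BUSY if it did work, IDLE otherwise. */
    static int
    demo_poll(void *arg)
    {
        struct demo_ctx *ctx = arg;

        ctx->run_count++;
        return SPDK_POLLER_BUSY;
    }

    static void
    register_poller(struct demo_ctx *ctx, bool timed)
    {
        /* period 0: run every reactor iteration (the "-l 0" case);
         * period 1: run roughly once per microsecond (the "-l 1" case). */
        uint64_t period_us = timed ? 1 : 0;

        ctx->poller = spdk_poller_register(demo_poll, ctx, period_us);
    }

    static void
    unregister_poller(struct demo_ctx *ctx)
    {
        spdk_poller_unregister(&ctx->poller);
    }
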
00:04:52.328 [2024-07-14 21:04:03.627864] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.263 ====================================== 00:04:53.263 busy:2200981936 (cyc) 00:04:53.263 total_run_count: 86747000 00:04:53.263 tsc_hz: 2199999327 (cyc) 00:04:53.263 ====================================== 00:04:53.263 poller_cost: 25 (cyc), 11 (nsec) 00:04:53.263 00:04:53.263 real 0m1.674s 00:04:53.263 user 0m1.168s 00:04:53.263 sys 0m0.505s 00:04:53.263 21:04:04 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:53.263 21:04:04 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:53.263 ************************************ 00:04:53.263 END TEST thread_poller_perf 00:04:53.263 ************************************ 00:04:53.263 21:04:04 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:53.263 21:04:04 thread -- thread/thread.sh@17 -- # [[ n != \y ]] 00:04:53.263 21:04:04 thread -- thread/thread.sh@18 -- # run_test thread_spdk_lock /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:53.263 21:04:04 thread -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:53.263 21:04:04 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:53.263 21:04:04 thread -- common/autotest_common.sh@10 -- # set +x 00:04:53.263 ************************************ 00:04:53.263 START TEST thread_spdk_lock 00:04:53.263 ************************************ 00:04:53.263 21:04:04 thread.thread_spdk_lock -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock 00:04:53.263 [2024-07-14 21:04:04.785212] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:53.263 [2024-07-14 21:04:04.785460] [ DPDK EAL parameters: spdk_lock_test --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:53.831 EAL: TSC is not safe to use in SMP mode 00:04:53.832 EAL: TSC is not invariant 00:04:53.832 [2024-07-14 21:04:05.306101] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.094 [2024-07-14 21:04:05.387754] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:54.094 [2024-07-14 21:04:05.387823] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:04:54.094 [2024-07-14 21:04:05.390519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.094 [2024-07-14 21:04:05.390513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.372 [2024-07-14 21:04:05.834519] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 965:thread_execute_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:54.372 [2024-07-14 21:04:05.834581] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3083:spdk_spin_lock: *ERROR*: unrecoverable spinlock error 2: Deadlock detected (thread != sspin->thread) 00:04:54.372 [2024-07-14 21:04:05.834606] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:3038:sspin_stacks_print: *ERROR*: spinlock 0x315b60 00:04:54.372 [2024-07-14 21:04:05.835106] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:54.372 [2024-07-14 21:04:05.835206] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c:1026:thread_execute_timed_poller: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:54.372 [2024-07-14 21:04:05.835215] /home/vagrant/spdk_repo/spdk/lib/thread/thread.c: 860:msg_queue_run_batch: *ERROR*: unrecoverable spinlock error 7: Lock(s) held while SPDK thread going off CPU (thread->lock_count == 0) 00:04:54.630 Starting test contend 00:04:54.630 Worker Delay Wait us Hold us Total us 00:04:54.630 0 3 265680 165352 431032 00:04:54.630 1 5 162313 268198 430511 00:04:54.630 PASS test contend 00:04:54.630 Starting test hold_by_poller 00:04:54.630 PASS test hold_by_poller 00:04:54.630 Starting test hold_by_message 00:04:54.630 PASS test hold_by_message 00:04:54.630 /home/vagrant/spdk_repo/spdk/test/thread/lock/spdk_lock summary: 00:04:54.630 100014 assertions passed 00:04:54.630 0 assertions failed 00:04:54.630 00:04:54.630 real 0m1.161s 00:04:54.630 user 0m1.063s 00:04:54.630 sys 0m0.539s 00:04:54.630 21:04:05 thread.thread_spdk_lock -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.630 21:04:05 thread.thread_spdk_lock -- common/autotest_common.sh@10 -- # set +x 00:04:54.630 ************************************ 00:04:54.630 END TEST thread_spdk_lock 00:04:54.630 ************************************ 00:04:54.630 21:04:05 thread -- common/autotest_common.sh@1142 -- # return 0 00:04:54.630 00:04:54.630 real 0m4.872s 00:04:54.630 user 0m3.569s 00:04:54.630 sys 0m1.771s 00:04:54.630 21:04:05 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:54.630 21:04:05 thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.630 ************************************ 00:04:54.630 END TEST thread 00:04:54.630 ************************************ 00:04:54.630 21:04:06 -- common/autotest_common.sh@1142 -- # return 0 00:04:54.630 21:04:06 -- spdk/autotest.sh@183 -- # run_test accel /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:54.630 21:04:06 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:54.630 21:04:06 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:54.630 21:04:06 -- common/autotest_common.sh@10 -- # set +x 00:04:54.630 ************************************ 00:04:54.630 START TEST accel 00:04:54.630 ************************************ 00:04:54.630 21:04:06 accel -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel.sh 00:04:54.889 * Looking for test storage... 
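
The *ERROR* spinlock lines above are expected output: thread_spdk_lock deliberately provokes lib/thread's deadlock check and its "Lock(s) held while SPDK thread going off CPU" check, then reports PASS with 100014 assertions passed and 0 failed. The interface under test is spdk_spin_* from spdk/thread.h; below is a minimal, hypothetical use that respects the rule being enforced, namely keep the critical section short and never yield while holding the lock.

    #include "spdk/stdinc.h"
    #include "spdk/thread.h"

    struct shared_counter {
        struct spdk_spinlock lock;
        uint64_t value;
    };

    static void
    counter_init(struct shared_counter *c)
    {
        spdk_spin_init(&c->lock);
        c->value = 0;
    }

    /* Callable from any SPDK thread. The critical section never yields;
     * holding an spdk_spinlock while the thread goes off CPU is exactly
     * the "unrecoverable spinlock error 7" the test provokes above. */
    static void
    counter_add(struct shared_counter *c, uint64_t delta)
    {
        spdk_spin_lock(&c->lock);
        c->value += delta;
        spdk_spin_unlock(&c->lock);
    }

    static void
    counter_destroy(struct shared_counter *c)
    {
        assert(!spdk_spin_held(&c->lock));
        spdk_spin_destroy(&c->lock);
    }
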
00:04:54.889 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:04:54.889 21:04:06 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:04:54.889 21:04:06 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:04:54.889 21:04:06 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:54.889 21:04:06 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=46705 00:04:54.889 21:04:06 accel -- accel/accel.sh@63 -- # waitforlisten 46705 00:04:54.889 21:04:06 accel -- common/autotest_common.sh@829 -- # '[' -z 46705 ']' 00:04:54.889 21:04:06 accel -- accel/accel.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -c /tmp//sh-np.FPxUeH 00:04:54.889 21:04:06 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:54.889 21:04:06 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:54.889 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:54.889 21:04:06 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:54.889 21:04:06 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:54.889 21:04:06 accel -- common/autotest_common.sh@10 -- # set +x 00:04:54.889 [2024-07-14 21:04:06.196050] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:54.889 [2024-07-14 21:04:06.196254] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:55.458 EAL: TSC is not safe to use in SMP mode 00:04:55.458 EAL: TSC is not invariant 00:04:55.458 [2024-07-14 21:04:06.711884] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.458 [2024-07-14 21:04:06.789927] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:55.458 21:04:06 accel -- accel/accel.sh@61 -- # build_accel_config 00:04:55.458 21:04:06 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:55.458 21:04:06 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:55.458 21:04:06 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:55.458 21:04:06 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:55.458 21:04:06 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:55.458 21:04:06 accel -- accel/accel.sh@40 -- # local IFS=, 00:04:55.458 21:04:06 accel -- accel/accel.sh@41 -- # jq -r . 00:04:55.458 [2024-07-14 21:04:06.800700] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@862 -- # return 0 00:04:55.718 21:04:07 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:04:55.718 21:04:07 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:04:55.718 21:04:07 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:04:55.718 21:04:07 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:04:55.718 21:04:07 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:04:55.718 21:04:07 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@10 -- # set +x 00:04:55.718 21:04:07 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # IFS== 00:04:55.718 21:04:07 accel -- accel/accel.sh@72 -- # read -r opc module 00:04:55.718 21:04:07 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:04:55.718 21:04:07 accel -- accel/accel.sh@75 -- # killprocess 46705 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@948 -- # '[' -z 46705 ']' 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@952 -- # kill -0 46705 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@953 -- # uname 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@956 -- # ps -c -o command 46705 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@956 -- # tail -1 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:04:55.718 killing process with pid 46705 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 46705' 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@967 -- # kill 46705 00:04:55.718 21:04:07 accel -- common/autotest_common.sh@972 -- # wait 46705 00:04:55.978 21:04:07 accel -- accel/accel.sh@76 -- # trap - ERR 00:04:55.978 21:04:07 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:04:55.978 21:04:07 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:04:55.978 21:04:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.978 21:04:07 accel -- common/autotest_common.sh@10 -- # set +x 00:04:55.978 21:04:07 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:04:55.978 21:04:07 accel.accel_help -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.1ZwvOQ -h 00:04:55.978 21:04:07 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.978 21:04:07 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:04:55.978 21:04:07 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:55.978 21:04:07 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:04:55.978 21:04:07 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:55.978 21:04:07 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.978 21:04:07 accel -- common/autotest_common.sh@10 -- # set +x 00:04:55.978 ************************************ 00:04:55.978 START TEST accel_missing_filename 00:04:55.978 
************************************ 00:04:55.978 21:04:07 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:04:55.978 21:04:07 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:04:55.978 21:04:07 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:04:55.978 21:04:07 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:55.978 21:04:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.978 21:04:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:55.978 21:04:07 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.978 21:04:07 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:04:55.978 21:04:07 accel.accel_missing_filename -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.MufHZV -t 1 -w compress 00:04:55.978 [2024-07-14 21:04:07.461540] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:55.978 [2024-07-14 21:04:07.461731] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:56.545 EAL: TSC is not safe to use in SMP mode 00:04:56.545 EAL: TSC is not invariant 00:04:56.545 [2024-07-14 21:04:07.944125] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.545 [2024-07-14 21:04:08.016609] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:56.545 21:04:08 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:04:56.545 21:04:08 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:56.545 21:04:08 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:56.545 21:04:08 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:56.545 21:04:08 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:56.545 21:04:08 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:56.545 21:04:08 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:04:56.545 21:04:08 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:04:56.545 [2024-07-14 21:04:08.027302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:56.545 [2024-07-14 21:04:08.029801] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:56.545 [2024-07-14 21:04:08.064244] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:04:56.804 A filename is required. 
00:04:56.804 21:04:08 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:04:56.804 21:04:08 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:56.804 21:04:08 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:04:56.804 21:04:08 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:04:56.804 21:04:08 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:04:56.804 21:04:08 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:56.804 00:04:56.804 real 0m0.725s 00:04:56.804 user 0m0.204s 00:04:56.804 sys 0m0.517s 00:04:56.804 21:04:08 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.804 ************************************ 00:04:56.804 END TEST accel_missing_filename 00:04:56.804 ************************************ 00:04:56.804 21:04:08 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:04:56.804 21:04:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:56.804 21:04:08 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:56.804 21:04:08 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:56.804 21:04:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.804 21:04:08 accel -- common/autotest_common.sh@10 -- # set +x 00:04:56.804 ************************************ 00:04:56.804 START TEST accel_compress_verify 00:04:56.804 ************************************ 00:04:56.804 21:04:08 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:56.804 21:04:08 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:04:56.805 21:04:08 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:56.805 21:04:08 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:56.805 21:04:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.805 21:04:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:56.805 21:04:08 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:56.805 21:04:08 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:56.805 21:04:08 accel.accel_compress_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.NasiLE -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:04:56.805 [2024-07-14 21:04:08.234831] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:56.805 [2024-07-14 21:04:08.235091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:57.373 EAL: TSC is not safe to use in SMP mode 00:04:57.373 EAL: TSC is not invariant 00:04:57.373 [2024-07-14 21:04:08.760353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.373 [2024-07-14 21:04:08.853442] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:57.373 21:04:08 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:04:57.373 21:04:08 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:57.373 21:04:08 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:57.373 21:04:08 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:57.373 21:04:08 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:57.373 21:04:08 accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:57.373 21:04:08 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:04:57.373 21:04:08 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:04:57.373 [2024-07-14 21:04:08.866775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.373 [2024-07-14 21:04:08.870025] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:57.373 [2024-07-14 21:04:08.907323] accel_perf.c:1464:main: *ERROR*: ERROR starting application 00:04:57.632 00:04:57.632 Compression does not support the verify option, aborting. 00:04:57.632 21:04:09 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=211 00:04:57.632 21:04:09 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.632 21:04:09 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=83 00:04:57.632 21:04:09 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:04:57.632 21:04:09 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:04:57.632 21:04:09 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.632 00:04:57.632 real 0m0.810s 00:04:57.632 user 0m0.232s 00:04:57.632 sys 0m0.575s 00:04:57.632 21:04:09 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.632 21:04:09 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:04:57.632 ************************************ 00:04:57.632 END TEST accel_compress_verify 00:04:57.632 ************************************ 00:04:57.632 21:04:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:57.632 21:04:09 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:04:57.632 21:04:09 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:04:57.632 21:04:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.632 21:04:09 accel -- common/autotest_common.sh@10 -- # set +x 00:04:57.632 ************************************ 00:04:57.633 START TEST accel_wrong_workload 00:04:57.633 ************************************ 00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # 
valid_exec_arg accel_perf -t 1 -w foobar 00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:04:57.633 21:04:09 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.jvtI9F -t 1 -w foobar 00:04:57.633 Unsupported workload type: foobar 00:04:57.633 [2024-07-14 21:04:09.083725] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:04:57.633 accel_perf options: 00:04:57.633 [-h help message] 00:04:57.633 [-q queue depth per core] 00:04:57.633 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:57.633 [-T number of threads per core 00:04:57.633 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:04:57.633 [-t time in seconds] 00:04:57.633 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:57.633 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:57.633 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:57.633 [-l for compress/decompress workloads, name of uncompressed input file 00:04:57.633 [-S for crc32c workload, use this seed value (default 0) 00:04:57.633 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:57.633 [-f for fill workload, use this BYTE value (default 255) 00:04:57.633 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:57.633 [-y verify result if this switch is on] 00:04:57.633 [-a tasks to allocate per core (default: same value as -q)] 00:04:57.633 Can be used to spread operations across a wider range of memory. 
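
The accel_negative_buffers run just below feeds "-x -1" to the xor workload, which accel_perf's option parser rejects; per the usage text above, -x is the number of xor source buffers and its minimum is 2. For context, a purely illustrative C sketch of what that workload computes, with no SPDK API involved and all names ours:

    #include <stddef.h>
    #include <stdint.h>

    /* XOR 'nsrcs' equally sized source buffers into dst; this is the
     * operation behind accel_perf's "-w xor -x <nsrcs>" workload.
     * nsrcs must be at least 2 for the operation to be meaningful,
     * which is why "-x -1" is rejected below. */
    static void
    xor_buffers(uint8_t *dst, uint8_t *const *srcs, size_t nsrcs, size_t len)
    {
        for (size_t i = 0; i < len; i++) {
            uint8_t b = srcs[0][i];

            for (size_t j = 1; j < nsrcs; j++) {
                b ^= srcs[j][i];
            }
            dst[i] = b;
        }
    }
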
00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.633 00:04:57.633 real 0m0.008s 00:04:57.633 user 0m0.002s 00:04:57.633 sys 0m0.010s 00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.633 ************************************ 00:04:57.633 END TEST accel_wrong_workload 00:04:57.633 21:04:09 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:04:57.633 ************************************ 00:04:57.633 21:04:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:57.633 21:04:09 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:04:57.633 21:04:09 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:04:57.633 21:04:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.633 21:04:09 accel -- common/autotest_common.sh@10 -- # set +x 00:04:57.633 ************************************ 00:04:57.633 START TEST accel_negative_buffers 00:04:57.633 ************************************ 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:04:57.633 21:04:09 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.dosKyW -t 1 -w xor -y -x -1 00:04:57.633 -x option must be non-negative. 00:04:57.633 [2024-07-14 21:04:09.137242] app.c:1450:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:04:57.633 accel_perf options: 00:04:57.633 [-h help message] 00:04:57.633 [-q queue depth per core] 00:04:57.633 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:04:57.633 [-T number of threads per core 00:04:57.633 [-o transfer size in bytes (default: 4KiB. 
For compress/decompress, 0 means the input file size)] 00:04:57.633 [-t time in seconds] 00:04:57.633 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:04:57.633 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:04:57.633 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:04:57.633 [-l for compress/decompress workloads, name of uncompressed input file 00:04:57.633 [-S for crc32c workload, use this seed value (default 0) 00:04:57.633 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:04:57.633 [-f for fill workload, use this BYTE value (default 255) 00:04:57.633 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:04:57.633 [-y verify result if this switch is on] 00:04:57.633 [-a tasks to allocate per core (default: same value as -q)] 00:04:57.633 Can be used to spread operations across a wider range of memory. 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:57.633 00:04:57.633 real 0m0.010s 00:04:57.633 user 0m0.007s 00:04:57.633 sys 0m0.002s 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:57.633 ************************************ 00:04:57.633 END TEST accel_negative_buffers 00:04:57.633 21:04:09 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:04:57.633 ************************************ 00:04:57.633 21:04:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:57.633 21:04:09 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:04:57.633 21:04:09 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:57.633 21:04:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:57.633 21:04:09 accel -- common/autotest_common.sh@10 -- # set +x 00:04:57.892 ************************************ 00:04:57.892 START TEST accel_crc32c 00:04:57.892 ************************************ 00:04:57.892 21:04:09 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:04:57.892 21:04:09 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:04:57.892 21:04:09 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:04:57.892 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:57.892 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:57.892 21:04:09 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:04:57.892 21:04:09 accel.accel_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.tsXlHI -t 1 -w crc32c -S 32 -y 00:04:57.892 [2024-07-14 21:04:09.189135] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:04:57.892 [2024-07-14 21:04:09.189401] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:04:58.460 EAL: TSC is not safe to use in SMP mode 00:04:58.460 EAL: TSC is not invariant 00:04:58.460 [2024-07-14 21:04:09.711596] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:58.460 [2024-07-14 21:04:09.830863] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:04:58.460 [2024-07-14 21:04:09.841055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 
00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:04:58.460 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:58.461 21:04:09 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.838 21:04:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:59.838 21:04:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:59.838 21:04:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:59.838 21:04:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.838 
21:04:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:59.838 21:04:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:59.838 21:04:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:59.838 21:04:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.838 21:04:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:59.838 21:04:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:59.838 21:04:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:59.838 21:04:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:04:59.839 21:04:10 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:04:59.839 00:04:59.839 real 0m1.820s 00:04:59.839 user 0m1.255s 00:04:59.839 sys 0m0.573s 00:04:59.839 21:04:11 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:59.839 ************************************ 00:04:59.839 END TEST accel_crc32c 00:04:59.839 ************************************ 00:04:59.839 21:04:11 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:04:59.839 21:04:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:04:59.839 21:04:11 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:04:59.839 21:04:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:04:59.839 21:04:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:59.839 21:04:11 accel -- common/autotest_common.sh@10 -- # set +x 00:04:59.839 ************************************ 00:04:59.839 START TEST accel_crc32c_C2 00:04:59.839 ************************************ 00:04:59.839 21:04:11 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:04:59.839 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:04:59.839 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:04:59.839 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:04:59.839 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:04:59.839 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:04:59.839 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.8c2G3m -t 1 -w crc32c -y -C 2 00:04:59.839 [2024-07-14 21:04:11.057540] 
Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:04:59.839 [2024-07-14 21:04:11.057703] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:00.098 EAL: TSC is not safe to use in SMP mode 00:05:00.098 EAL: TSC is not invariant 00:05:00.098 [2024-07-14 21:04:11.597351] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:00.356 [2024-07-14 21:04:11.682999] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:05:00.357 [2024-07-14 21:04:11.690593] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- 
accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:00.357 21:04:11 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
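At this point the harness has finished echoing the crc32c_C2 option set (opcode crc32c, a 4096-byte value, software module, 1-second run). As a rough reproduction sketch only — assuming the vagrant tree layout this log shows, and skipping the generated /tmp//sh-np.* config file, which is optional — the same case could be driven by hand with the flags captured in the run_test line above:

# Minimal sketch, not part of the captured log. Flags mirror the recorded
# invocation: 1-second run (-t 1), crc32c workload (-w crc32c), verify the
# results (-y), plus the -C 2 argument that distinguishes this _C2 variant.
cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w crc32c -y -C 2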
00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:01.293 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:01.294 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:01.294 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:01.294 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:01.294 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:05:01.294 21:04:12 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:01.294 00:05:01.294 real 0m1.787s 00:05:01.294 user 0m1.222s 00:05:01.294 sys 0m0.576s 00:05:01.294 21:04:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:01.294 21:04:12 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:01.294 ************************************ 00:05:01.294 END TEST accel_crc32c_C2 00:05:01.294 ************************************ 00:05:01.553 21:04:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:01.553 21:04:12 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:05:01.553 21:04:12 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:01.553 21:04:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:01.553 21:04:12 accel -- common/autotest_common.sh@10 -- # set +x 00:05:01.553 ************************************ 00:05:01.553 START TEST accel_copy 00:05:01.553 ************************************ 00:05:01.553 21:04:12 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:05:01.553 21:04:12 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:05:01.553 21:04:12 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 00:05:01.553 21:04:12 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:01.553 21:04:12 
accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:01.553 21:04:12 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:05:01.553 21:04:12 accel.accel_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.RxuPgk -t 1 -w copy -y 00:05:01.553 [2024-07-14 21:04:12.895969] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:01.553 [2024-07-14 21:04:12.896253] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:02.120 EAL: TSC is not safe to use in SMP mode 00:05:02.120 EAL: TSC is not invariant 00:05:02.120 [2024-07-14 21:04:13.411059] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:02.120 [2024-07-14 21:04:13.493666] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:05:02.120 [2024-07-14 21:04:13.504622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy 
-- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:02.120 21:04:13 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@20 -- # 
val= 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:05:03.494 21:04:14 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:03.494 00:05:03.494 real 0m1.773s 00:05:03.494 user 0m1.214s 00:05:03.494 sys 0m0.568s 00:05:03.494 21:04:14 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:03.494 21:04:14 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 ************************************ 00:05:03.494 END TEST accel_copy 00:05:03.494 ************************************ 00:05:03.494 21:04:14 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:03.494 21:04:14 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:03.494 21:04:14 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:03.494 21:04:14 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:03.494 21:04:14 accel -- common/autotest_common.sh@10 -- # set +x 00:05:03.494 ************************************ 00:05:03.494 START TEST accel_fill 00:05:03.494 ************************************ 00:05:03.494 21:04:14 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:03.494 21:04:14 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:05:03.494 21:04:14 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:05:03.494 21:04:14 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:03.494 21:04:14 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.494 21:04:14 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:03.494 21:04:14 accel.accel_fill -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.FlOjQT -t 1 -w fill -f 128 -q 64 -a 64 -y 00:05:03.494 [2024-07-14 21:04:14.717694] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
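The accel_fill block launches with the fill-specific arguments visible in the command line above (-f 128 -q 64 -a 64), and the option dump that follows echoes the same values back (fill byte 0x80, then 64 twice). A hand-run sketch under the same path assumptions:

# Minimal sketch, assuming the same tree layout; flags are copied verbatim
# from the captured invocation. -f 128 is the fill byte (echoed as 0x80 in
# the trace below); -q 64 and -a 64 are passed through exactly as recorded.
cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y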
00:05:03.495 [2024-07-14 21:04:14.717963] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:03.753 EAL: TSC is not safe to use in SMP mode 00:05:03.754 EAL: TSC is not invariant 00:05:03.754 [2024-07-14 21:04:15.207987] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:03.754 [2024-07-14 21:04:15.281741] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@12 -- # build_accel_config 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:05:03.754 [2024-07-14 21:04:15.294502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 
bytes' 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:03.754 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.028 21:04:15 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.988 21:04:16 
accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:05:04.988 21:04:16 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:04.988 00:05:04.988 real 0m1.738s 00:05:04.988 user 0m1.217s 00:05:04.988 sys 0m0.529s 00:05:04.988 21:04:16 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:04.988 21:04:16 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:05:04.988 ************************************ 00:05:04.988 END TEST accel_fill 00:05:04.988 ************************************ 00:05:04.988 21:04:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:04.988 21:04:16 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:05:04.988 21:04:16 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:04.988 21:04:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:04.988 21:04:16 accel -- common/autotest_common.sh@10 -- # set +x 00:05:04.988 ************************************ 00:05:04.988 START TEST accel_copy_crc32c 00:05:04.988 ************************************ 00:05:04.988 21:04:16 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:05:04.988 21:04:16 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:05:04.988 21:04:16 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:05:04.988 21:04:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:04.988 21:04:16 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:04.988 21:04:16 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:05:04.988 21:04:16 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.7ven5h -t 1 -w copy_crc32c -y 00:05:04.988 [2024-07-14 21:04:16.504751] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
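For copy_crc32c the perf tool appears to compute a CRC while copying, which is why the option dump below carries two '4096 bytes' buffer values where the plain crc32c runs carried one. A reproduction sketch under the same assumptions as above:

# Minimal sketch, not from the log itself; flags match the recorded
# run_test accel_copy_crc32c line (1-second verified copy_crc32c run).
cd /home/vagrant/spdk_repo/spdk
./build/examples/accel_perf -t 1 -w copy_crc32c -y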
00:05:04.988 [2024-07-14 21:04:16.505052] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:05.556 EAL: TSC is not safe to use in SMP mode 00:05:05.556 EAL: TSC is not invariant 00:05:05.556 [2024-07-14 21:04:17.008240] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.556 [2024-07-14 21:04:17.082433] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:05:05.556 [2024-07-14 21:04:17.092656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- 
accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.556 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:05:05.557 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.557 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.557 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.557 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.557 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.557 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
IFS=: 00:05:05.557 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:05.557 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:05.815 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:05.815 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:05.815 21:04:17 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:06.751 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:06.751 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:06.751 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:06.751 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:06.751 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:06.751 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:06.752 00:05:06.752 real 0m1.738s 00:05:06.752 user 0m1.180s 00:05:06.752 sys 0m0.568s 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:06.752 21:04:18 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:05:06.752 ************************************ 00:05:06.752 END TEST accel_copy_crc32c 00:05:06.752 ************************************ 00:05:06.752 21:04:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:06.752 21:04:18 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:05:06.752 21:04:18 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:06.752 21:04:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:06.752 21:04:18 accel -- common/autotest_common.sh@10 -- # set +x 
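Every block above ends the same way before autotest_common.sh re-enables xtrace and returns 0 to run_test: a real/user/sys timing line plus three [[ ]] checks whose variables have already been expanded in the trace (e.g. [[ -n software ]], [[ -n copy_crc32c ]], [[ software == software ]]). Read back into unexpanded form, the pass condition is roughly the following — a paraphrase, not the literal accel.sh source:

# Hedged paraphrase of the expanded checks seen at the end of each block:
# a module was selected, the opcode under test was recorded, and this run
# pinned the software path rather than a hardware accel module.
if [[ -n "$accel_module" && -n "$accel_opc" && "$accel_module" == "software" ]]; then
  echo "PASS: software module ran $accel_opc"
fi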
00:05:06.752 ************************************ 00:05:06.752 START TEST accel_copy_crc32c_C2 00:05:06.752 ************************************ 00:05:06.752 21:04:18 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:05:06.752 21:04:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:05:06.752 21:04:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:05:06.752 21:04:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:06.752 21:04:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:05:06.752 21:04:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:06.752 21:04:18 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.f4RIE5 -t 1 -w copy_crc32c -y -C 2 00:05:06.752 [2024-07-14 21:04:18.293918] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:06.752 [2024-07-14 21:04:18.294192] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:07.687 EAL: TSC is not safe to use in SMP mode 00:05:07.687 EAL: TSC is not invariant 00:05:07.687 [2024-07-14 21:04:18.977495] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:07.687 [2024-07-14 21:04:19.065260] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 
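The build_accel_config lines at the top of each block (accel_json_cfg=(), the three [[ 0 -gt 0 ]] guards, local IFS=, and jq -r .) assemble an optional JSON module config; with no entries queued, every guard evaluates false and nothing reaches jq. A hedged sketch of that shape, with names taken from the trace and the surrounding plumbing inferred:

# Sketch only; the variable names come from the xtrace, the function body
# is inferred. An empty config array skips the jq formatting step entirely.
build_accel_config_sketch() {
  local accel_json_cfg=()
  [[ ${#accel_json_cfg[@]} -gt 0 ]] || return 0  # empty: nothing for jq
  local IFS=,
  echo "[${accel_json_cfg[*]}]" | jq -r .
}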
00:05:07.687 [2024-07-14 21:04:19.073569] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.687 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:07.688 21:04:19 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:09.064 00:05:09.064 real 0m1.943s 00:05:09.064 user 0m1.232s 00:05:09.064 sys 0m0.719s 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:09.064 ************************************ 00:05:09.064 END TEST accel_copy_crc32c_C2 00:05:09.064 ************************************ 00:05:09.064 21:04:20 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:05:09.064 21:04:20 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:09.064 21:04:20 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:05:09.064 21:04:20 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:09.064 21:04:20 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:09.064 21:04:20 accel -- common/autotest_common.sh@10 -- # set +x 00:05:09.064 ************************************ 00:05:09.064 START TEST accel_dualcast 00:05:09.064 ************************************ 00:05:09.064 21:04:20 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:05:09.064 21:04:20 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:05:09.064 21:04:20 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:05:09.064 21:04:20 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:05:09.064 21:04:20 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:05:09.064 21:04:20 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:05:09.064 21:04:20 accel.accel_dualcast -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.wpUG7S -t 1 -w dualcast -y 00:05:09.064 [2024-07-14 21:04:20.285981] Starting SPDK 
v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:09.064 [2024-07-14 21:04:20.286247] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:09.323 EAL: TSC is not safe to use in SMP mode
00:05:09.323 EAL: TSC is not invariant
00:05:09.323 [2024-07-14 21:04:20.803620] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:09.582 [2024-07-14 21:04:20.881016] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:09.582 21:04:20 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config (json cfg checks @31-@41 trimmed)
00:05:09.582 [2024-07-14 21:04:20.891084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:09.582 21:04:20 accel.accel_dualcast -- accel/accel.sh@20 -- # config values read: 0x1, dualcast (accel_opc=dualcast), '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes (repetitive case/IFS=:/read xtrace trimmed)
00:05:10.518 21:04:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:10.518 21:04:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]]
00:05:10.518 21:04:22 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:10.518 real 0m1.759s
00:05:10.518 user 0m1.199s
00:05:10.518 sys 0m0.570s
00:05:10.518 21:04:22 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:10.518 ************************************
00:05:10.518 END TEST accel_dualcast
00:05:10.518 ************************************
00:05:10.518 21:04:22 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x
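
For reference, the dualcast case above reduces to a single accel_perf run; a minimal standalone sketch, assuming the in-tree binary path shown in the compare invocation below and omitting the harness-generated -c JSON temp file:

    # dualcast: one source buffer copied to two destinations; -y verifies the result
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w dualcast -y
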
00:05:10.776 21:04:22 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:10.776 21:04:22 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y
00:05:10.776 21:04:22 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:10.776 21:04:22 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:10.776 21:04:22 accel -- common/autotest_common.sh@10 -- # set +x
00:05:10.776 ************************************
00:05:10.776 START TEST accel_compare
00:05:10.776 ************************************
00:05:10.776 21:04:22 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y
00:05:10.776 21:04:22 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc
00:05:10.776 21:04:22 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module
00:05:10.776 21:04:22 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y
00:05:10.776 21:04:22 accel.accel_compare -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ETAGI8 -t 1 -w compare -y
00:05:10.776 [2024-07-14 21:04:22.096726] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:10.776 [2024-07-14 21:04:22.097068] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:11.343 EAL: TSC is not safe to use in SMP mode
00:05:11.343 EAL: TSC is not invariant
00:05:11.343 [2024-07-14 21:04:22.805202] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:11.343 [2024-07-14 21:04:22.888726] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:11.602 21:04:22 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config (json cfg checks trimmed)
00:05:11.602 [2024-07-14 21:04:22.900626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:11.602 21:04:22 accel.accel_compare -- accel/accel.sh@20 -- # config values read: 0x1, compare (accel_opc=compare), '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes (repetitive case/IFS=:/read xtrace trimmed)
00:05:12.535 21:04:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:12.535 21:04:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]]
00:05:12.535 21:04:24 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:12.535 real 0m1.963s
00:05:12.535 user 0m1.239s
00:05:12.535 sys 0m0.737s
00:05:12.535 21:04:24 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:12.535 21:04:24 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x
00:05:12.535 ************************************
00:05:12.535 END TEST accel_compare
00:05:12.535 ************************************
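
The START/END banners and the real/user/sys timings above come from the harness's run_test wrapper; a simplified sketch of that pattern (not the verbatim autotest_common.sh source):

    run_test() {
        local name=$1; shift           # e.g. accel_compare
        echo "START TEST $name"
        time "$@"                      # emits the real/user/sys block seen in the log
        echo "END TEST $name"
    }
    run_test accel_compare accel_test -t 1 -w compare -y
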
00:05:12.793 21:04:24 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:12.793 21:04:24 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y
00:05:12.793 21:04:24 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']'
00:05:12.793 21:04:24 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:12.793 21:04:24 accel -- common/autotest_common.sh@10 -- # set +x
00:05:12.793 ************************************
00:05:12.793 START TEST accel_xor
00:05:12.793 ************************************
00:05:12.793 21:04:24 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y
00:05:12.793 21:04:24 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:05:12.793 21:04:24 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:05:12.793 21:04:24 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y
00:05:12.793 21:04:24 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.ko1U40 -t 1 -w xor -y
00:05:12.793 [2024-07-14 21:04:24.107369] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:12.793 [2024-07-14 21:04:24.107625] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:13.052 EAL: TSC is not safe to use in SMP mode
00:05:13.052 EAL: TSC is not invariant
00:05:13.052 [2024-07-14 21:04:24.600056] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:13.310 [2024-07-14 21:04:24.677414] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:13.310 21:04:24 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config (json cfg checks trimmed)
00:05:13.310 [2024-07-14 21:04:24.687502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.311 21:04:24 accel.accel_xor -- accel/accel.sh@20 -- # config values read: 0x1, xor (accel_opc=xor), 2, '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes (repetitive case/IFS=:/read xtrace trimmed)
00:05:14.709 21:04:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:14.709 21:04:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:14.709 21:04:25 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:14.709 real 0m1.742s
00:05:14.709 user 0m1.203s
00:05:14.709 sys 0m0.550s
00:05:14.709 21:04:25 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:14.709 21:04:25 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:05:14.709 ************************************
00:05:14.709 END TEST accel_xor
00:05:14.709 ************************************
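
The second xor pass that follows differs only by -x 3; a sketch of the two invocations side by side (flags copied from the run_test lines, binary path abbreviated; the config trace reads 2 sources above and 3 in the next run):

    accel_perf -t 1 -w xor -y          # default: two source buffers per xor
    accel_perf -t 1 -w xor -y -x 3     # -x 3: xor across three source buffers
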
00:05:14.709 21:04:25 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:14.709 21:04:25 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3
00:05:14.709 21:04:25 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']'
00:05:14.709 21:04:25 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:14.709 21:04:25 accel -- common/autotest_common.sh@10 -- # set +x
00:05:14.709 ************************************
00:05:14.709 START TEST accel_xor
00:05:14.709 ************************************
00:05:14.709 21:04:25 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3
00:05:14.709 21:04:25 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc
00:05:14.709 21:04:25 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module
00:05:14.709 21:04:25 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3
00:05:14.709 21:04:25 accel.accel_xor -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.6Bkl8a -t 1 -w xor -y -x 3
00:05:14.709 [2024-07-14 21:04:25.895259] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:14.709 [2024-07-14 21:04:25.895533] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:14.967 EAL: TSC is not safe to use in SMP mode
00:05:14.967 EAL: TSC is not invariant
00:05:14.967 [2024-07-14 21:04:26.422500] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:14.968 [2024-07-14 21:04:26.495311] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:14.968 21:04:26 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config (json cfg checks trimmed)
00:05:14.968 [2024-07-14 21:04:26.507055] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:14.968 21:04:26 accel.accel_xor -- accel/accel.sh@20 -- # config values read: 0x1, xor (accel_opc=xor), 3, '4096 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', Yes (repetitive case/IFS=:/read xtrace trimmed)
00:05:16.344 21:04:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:16.344 21:04:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]]
00:05:16.344 21:04:27 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:16.344 real 0m1.762s
00:05:16.344 user 0m1.202s
00:05:16.344 sys 0m0.567s
00:05:16.344 21:04:27 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:16.344 21:04:27 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x
00:05:16.344 ************************************
00:05:16.344 END TEST accel_xor
00:05:16.344 ************************************
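
The dualcast, compare and xor cases so far share one invocation shape; a quick sweep over them, as a sketch (software module, binary path as in the runs above):

    for w in dualcast compare xor; do
        /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w "$w" -y
    done
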
00:05:16.344 21:04:27 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:16.344 21:04:27 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify
00:05:16.344 21:04:27 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:16.344 21:04:27 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:16.344 21:04:27 accel -- common/autotest_common.sh@10 -- # set +x
00:05:16.344 ************************************
00:05:16.344 START TEST accel_dif_verify
00:05:16.344 ************************************
00:05:16.344 21:04:27 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify
00:05:16.344 21:04:27 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc
00:05:16.344 21:04:27 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module
00:05:16.344 21:04:27 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify
00:05:16.344 21:04:27 accel.accel_dif_verify -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.FYXSUN -t 1 -w dif_verify
00:05:16.344 [2024-07-14 21:04:27.706300] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:16.344 [2024-07-14 21:04:27.706570] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:16.912 EAL: TSC is not safe to use in SMP mode
00:05:16.912 EAL: TSC is not invariant
00:05:16.912 [2024-07-14 21:04:28.205918] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:16.912 [2024-07-14 21:04:28.283669] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:16.912 21:04:28 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config (json cfg checks trimmed)
00:05:16.912 [2024-07-14 21:04:28.295249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.912 21:04:28 accel.accel_dif_verify -- accel/accel.sh@20 -- # config values read: 0x1, dif_verify (accel_opc=dif_verify), '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', No (repetitive case/IFS=:/read xtrace trimmed)
00:05:18.284 21:04:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:18.285 21:04:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]]
00:05:18.285 21:04:29 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:18.285 real 0m1.752s
00:05:18.285 user 0m1.208s
00:05:18.285 sys 0m0.548s
00:05:18.285 21:04:29 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:18.285 21:04:29 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x
00:05:18.285 ************************************
00:05:18.285 END TEST accel_dif_verify
00:05:18.285 ************************************
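
Each test's pass check is the trio of [[ ... ]] assertions traced just before the timing block; rendered as plain bash over the script's variables (names as in accel.sh):

    [[ -n $accel_module ]]              # a module was selected
    [[ -n $accel_opc ]]                 # the opcode under test was recorded
    [[ $accel_module == software ]]     # and the run used the software path
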
00:05:18.285 21:04:29 accel -- common/autotest_common.sh@1142 -- # return 0
00:05:18.285 21:04:29 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate
00:05:18.285 21:04:29 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']'
00:05:18.285 21:04:29 accel -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:18.285 21:04:29 accel -- common/autotest_common.sh@10 -- # set +x
00:05:18.285 ************************************
00:05:18.285 START TEST accel_dif_generate
00:05:18.285 ************************************
00:05:18.285 21:04:29 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate
00:05:18.285 21:04:29 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc
00:05:18.285 21:04:29 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module
00:05:18.285 21:04:29 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate
00:05:18.285 21:04:29 accel.accel_dif_generate -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.sXptRA -t 1 -w dif_generate
00:05:18.285 [2024-07-14 21:04:29.506696] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:05:18.285 [2024-07-14 21:04:29.506874] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:05:18.542 EAL: TSC is not safe to use in SMP mode
00:05:18.542 EAL: TSC is not invariant
00:05:18.542 [2024-07-14 21:04:30.037158] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.800 [2024-07-14 21:04:30.113462] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:05:18.800 21:04:30 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config (json cfg checks trimmed)
00:05:18.801 [2024-07-14 21:04:30.126543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:18.801 21:04:30 accel.accel_dif_generate -- accel/accel.sh@20 -- # config values read: 0x1, dif_generate (accel_opc=dif_generate), '4096 bytes', '4096 bytes', '512 bytes', '8 bytes', software (accel_module=software), 32, 32, 1, '1 seconds', No (repetitive case/IFS=:/read xtrace trimmed)
00:05:19.734 21:04:31 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]]
00:05:19.734 21:04:31 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]]
00:05:19.734 21:04:31 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]]
00:05:19.734 real 0m1.773s
00:05:19.734 user 0m1.221s
00:05:19.734 sys 0m0.556s
00:05:19.734 21:04:31 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:19.734 21:04:31 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x
00:05:19.734 ************************************
00:05:19.734 END TEST accel_dif_generate
00:05:19.734 ************************************
dif_generate_copy 00:05:19.992 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.HO4qgg -t 1 -w dif_generate_copy 00:05:19.992 [2024-07-14 21:04:31.329052] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:19.992 [2024-07-14 21:04:31.329259] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:20.559 EAL: TSC is not safe to use in SMP mode 00:05:20.559 EAL: TSC is not invariant 00:05:20.559 [2024-07-14 21:04:31.842322] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.559 [2024-07-14 21:04:31.928690] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:05:20.559 [2024-07-14 21:04:31.940053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # 
val=dif_generate_copy 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" 
in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:20.559 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:20.560 21:04:31 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.935 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.936 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.936 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:05:21.936 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:05:21.936 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:05:21.936 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:05:21.936 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:21.936 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:05:21.936 21:04:33 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:21.936 00:05:21.936 real 0m1.772s 00:05:21.936 user 0m1.222s 00:05:21.936 sys 0m0.560s 00:05:21.936 21:04:33 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:21.936 21:04:33 
accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:05:21.936 ************************************ 00:05:21.936 END TEST accel_dif_generate_copy 00:05:21.936 ************************************ 00:05:21.936 21:04:33 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:21.936 21:04:33 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:05:21.936 21:04:33 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:21.936 21:04:33 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:21.936 21:04:33 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:21.936 21:04:33 accel -- common/autotest_common.sh@10 -- # set +x 00:05:21.936 ************************************ 00:05:21.936 START TEST accel_comp 00:05:21.936 ************************************ 00:05:21.936 21:04:33 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:21.936 21:04:33 accel.accel_comp -- accel/accel.sh@16 -- # local accel_opc 00:05:21.936 21:04:33 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:05:21.936 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:21.936 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:21.936 21:04:33 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:21.936 21:04:33 accel.accel_comp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.o1v4Fa -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:21.936 [2024-07-14 21:04:33.148070] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:21.936 [2024-07-14 21:04:33.148284] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:22.195 EAL: TSC is not safe to use in SMP mode 00:05:22.195 EAL: TSC is not invariant 00:05:22.195 [2024-07-14 21:04:33.643174] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:22.195 [2024-07-14 21:04:33.718884] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 
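Note: the trace above launches the accel_comp case through the same accel_perf example binary as the earlier DIF tests, this time with the compress workload against the checked-in 'bib' test file. A minimal sketch of reproducing that run by hand, assuming the same build-tree layout and that the generated /tmp//sh-np.* JSON config (whose contents the log does not show) can be omitted for a default software-module run:

    # compress the 'bib' test file for 1 second using the default software accel module
    /home/vagrant/spdk_repo/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /home/vagrant/spdk_repo/spdk/test/accel/bib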
00:05:22.195 [2024-07-14 21:04:33.727144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 
accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.195 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.196 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.196 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.196 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:22.196 21:04:33 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:22.196 21:04:33 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:22.196 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:22.196 21:04:33 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:23.572 
21:04:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:05:23.572 21:04:34 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:23.572 00:05:23.572 real 0m1.731s 00:05:23.572 user 0m1.217s 00:05:23.572 sys 0m0.525s 00:05:23.572 21:04:34 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:23.572 ************************************ 00:05:23.572 END TEST accel_comp 00:05:23.572 ************************************ 00:05:23.572 21:04:34 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:05:23.572 21:04:34 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:23.572 21:04:34 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.572 21:04:34 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:05:23.572 21:04:34 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.572 21:04:34 accel -- common/autotest_common.sh@10 -- # set +x 00:05:23.572 ************************************ 00:05:23.572 START TEST accel_decomp 00:05:23.572 ************************************ 00:05:23.572 21:04:34 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.572 21:04:34 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:05:23.572 21:04:34 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:05:23.572 21:04:34 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:23.572 21:04:34 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:23.572 21:04:34 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.572 21:04:34 accel.accel_decomp -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.U8KdAV -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y 00:05:23.572 [2024-07-14 21:04:34.927266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:23.572 [2024-07-14 21:04:34.927475] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:24.137 EAL: TSC is not safe to use in SMP mode 00:05:24.137 EAL: TSC is not invariant 00:05:24.137 [2024-07-14 21:04:35.445271] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.137 [2024-07-14 21:04:35.518447] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
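Note: the same startup notices recur before every accel_perf run in this job — EAL warns that the TSC is neither SMP-safe nor invariant inside the VM, and spdk_app_start cannot parse /proc/stat, which does not exist on FreeBSD. All of these are informational on this platform. When triaging a log like this one, it can help to drop the recurring NOTICE lines and keep only higher-severity output; a sketch, assuming the console output was saved to a file (the filename here is hypothetical) and that warnings and errors use the same *LEVEL* markers as the *NOTICE* lines above:

    # keep only warning/error lines, dropping the recurring startup NOTICEs
    grep -E '\*(WARNING|ERROR)\*' freebsd-vg-autotest-console.log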
00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:05:24.137 [2024-07-14 21:04:35.528232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- 
accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=software 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:24.137 21:04:35 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.523 21:04:36 
accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:25.523 21:04:36 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:25.523 00:05:25.523 real 0m1.769s 00:05:25.523 user 0m1.220s 00:05:25.523 sys 0m0.559s 00:05:25.523 21:04:36 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:25.523 21:04:36 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:05:25.523 ************************************ 00:05:25.523 END TEST accel_decomp 00:05:25.523 ************************************ 00:05:25.523 21:04:36 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:25.523 21:04:36 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:25.523 21:04:36 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:25.523 21:04:36 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:25.523 21:04:36 accel -- common/autotest_common.sh@10 -- # set +x 00:05:25.523 ************************************ 00:05:25.523 START TEST accel_decomp_full 00:05:25.523 ************************************ 00:05:25.523 21:04:36 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:25.523 21:04:36 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:05:25.523 21:04:36 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:05:25.523 21:04:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:25.523 21:04:36 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:25.523 21:04:36 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:25.524 21:04:36 accel.accel_decomp_full -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.uq95qr -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 00:05:25.524 [2024-07-14 21:04:36.744545] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:25.524 [2024-07-14 21:04:36.744794] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:25.782 EAL: TSC is not safe to use in SMP mode 00:05:25.782 EAL: TSC is not invariant 00:05:25.782 [2024-07-14 21:04:37.243093] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.782 [2024-07-14 21:04:37.319662] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:25.782 21:04:37 accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:05:25.782 21:04:37 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:25.782 21:04:37 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:25.782 21:04:37 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:25.782 21:04:37 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:25.782 21:04:37 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:25.782 21:04:37 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:05:25.782 21:04:37 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:05:25.782 [2024-07-14 21:04:37.329741] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # 
val=decompress 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full 
-- accel/accel.sh@20 -- # val= 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.040 21:04:37 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:26.975 21:04:38 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:26.975 00:05:26.975 real 0m1.755s 00:05:26.975 user 0m1.216s 00:05:26.975 sys 0m0.541s 00:05:26.975 21:04:38 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.975 21:04:38 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:05:26.975 ************************************ 00:05:26.975 END TEST accel_decomp_full 00:05:26.975 ************************************ 00:05:27.234 21:04:38 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:27.234 21:04:38 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 
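Note: accel_decomp_mcore is the first multi-core variant in this run. The -m 0xf argument is a hexadecimal core mask with bits 0 through 3 set, so the DPDK EAL is asked for four cores — which matches the four per-core "Unable to parse /proc/stat" and "Reactor started" notices that follow. A quick arithmetic check of the mask value:

    # 0xf == 0b1111: bits 0 through 3 set, i.e. cores 0,1,2,3
    printf '0x%x\n' $(( (1 << 4) - 1 ))   # prints 0xf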
00:05:27.234 21:04:38 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:27.234 21:04:38 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:27.234 21:04:38 accel -- common/autotest_common.sh@10 -- # set +x 00:05:27.234 ************************************ 00:05:27.234 START TEST accel_decomp_mcore 00:05:27.234 ************************************ 00:05:27.234 21:04:38 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:27.234 21:04:38 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:27.234 21:04:38 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:27.234 21:04:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.234 21:04:38 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.234 21:04:38 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:27.234 21:04:38 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.PwbK8f -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -m 0xf 00:05:27.234 [2024-07-14 21:04:38.546449] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:27.234 [2024-07-14 21:04:38.546735] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:27.800 EAL: TSC is not safe to use in SMP mode 00:05:27.800 EAL: TSC is not invariant 00:05:27.800 [2024-07-14 21:04:39.060083] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:27.800 [2024-07-14 21:04:39.136459] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:27.800 [2024-07-14 21:04:39.136547] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:27.800 [2024-07-14 21:04:39.136555] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:27.800 [2024-07-14 21:04:39.136562] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 
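Note: each test above is driven through run_test from autotest_common.sh, which prints the starred START TEST / END TEST banners seen throughout this log. A much-simplified sketch of that pattern — not the actual implementation, which also manages xtrace state and produces the real/user/sys timings printed at the end of each test:

    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        "$@" && echo "END TEST $name"
    }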
00:05:27.800 [2024-07-14 21:04:39.148324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:27.800 [2024-07-14 21:04:39.148489] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.800 [2024-07-14 21:04:39.148392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:27.800 [2024-07-14 21:04:39.148479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 
00:05:27.800 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:27.801 21:04:39 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- 
accel/accel.sh@20 -- # val= 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:29.176 00:05:29.176 real 0m1.776s 00:05:29.176 user 0m4.352s 00:05:29.176 sys 0m0.565s 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:29.176 21:04:40 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:29.176 ************************************ 00:05:29.176 END TEST accel_decomp_mcore 00:05:29.176 ************************************ 00:05:29.176 21:04:40 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:29.176 21:04:40 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l 
/home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:29.176 21:04:40 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:29.176 21:04:40 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:29.176 21:04:40 accel -- common/autotest_common.sh@10 -- # set +x 00:05:29.176 ************************************ 00:05:29.176 START TEST accel_decomp_full_mcore 00:05:29.176 ************************************ 00:05:29.176 21:04:40 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:29.176 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:05:29.176 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:05:29.176 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.176 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.176 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:29.176 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.BOtwsf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf 00:05:29.177 [2024-07-14 21:04:40.367914] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:29.177 [2024-07-14 21:04:40.368091] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:29.435 EAL: TSC is not safe to use in SMP mode 00:05:29.435 EAL: TSC is not invariant 00:05:29.435 [2024-07-14 21:04:40.896140] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:29.435 [2024-07-14 21:04:40.976298] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:29.435 [2024-07-14 21:04:40.976358] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:29.435 [2024-07-14 21:04:40.976381] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:29.435 [2024-07-14 21:04:40.976388] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 00:05:29.435 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:05:29.435 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:29.435 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:29.435 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:29.435 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:29.435 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:29.435 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:05:29.435 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 
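The accel_decomp_full_mcore run traced above reduces to the single accel_perf invocation at accel/accel.sh@12. A hand-run equivalent is sketched below: the binary path, input file, and flags are copied from the trace, but the flag readings in the comments are inferred from the traced values (core mask 0xf matching the four reactors started just below, the '111250 bytes' job size for -o 0, '1 seconds' for -t 1), not from accel_perf's help text; the throwaway -c config that build_accel_config generated (/tmp//sh-np.BOtwsf) is omitted.

# -t 1 : run the workload for 1 second; -w decompress : operation under test
# -l ...bib : compressed input bitstream; -y : verify decompressed output (inferred)
# -o 0 : full-buffer mode -- one 111250-byte job instead of 4096-byte chunks (inferred)
# -m 0xf : core mask, cores 0-3
/home/vagrant/spdk_repo/spdk/build/examples/accel_perf -t 1 -w decompress \
    -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -m 0xf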
00:05:29.693 [2024-07-14 21:04:40.989150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.693 [2024-07-14 21:04:40.989010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.693 [2024-07-14 21:04:40.989066] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.693 [2024-07-14 21:04:40.989143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" 
in 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.693 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.694 21:04:40 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:29.694 21:04:41 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.629 
21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.629 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:30.630 00:05:30.630 real 0m1.794s 00:05:30.630 user 0m4.371s 
00:05:30.630 sys 0m0.574s 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:30.630 21:04:42 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:05:30.630 ************************************ 00:05:30.630 END TEST accel_decomp_full_mcore 00:05:30.630 ************************************ 00:05:30.889 21:04:42 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:30.889 21:04:42 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:30.889 21:04:42 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:30.889 21:04:42 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:30.889 21:04:42 accel -- common/autotest_common.sh@10 -- # set +x 00:05:30.889 ************************************ 00:05:30.889 START TEST accel_decomp_mthread 00:05:30.889 ************************************ 00:05:30.889 21:04:42 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:30.889 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:30.889 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:30.889 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:30.889 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:30.889 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:30.889 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.XIZYTm -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -T 2 00:05:30.889 [2024-07-14 21:04:42.207064] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:30.889 [2024-07-14 21:04:42.207347] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:31.456 EAL: TSC is not safe to use in SMP mode 00:05:31.456 EAL: TSC is not invariant 00:05:31.456 [2024-07-14 21:04:42.708195] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:31.456 [2024-07-14 21:04:42.792721] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 
00:05:31.456 [2024-07-14 21:04:42.800971] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.456 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@22 
-- # accel_module=software 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:31.457 21:04:42 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.831 21:04:43 
accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:32.831 21:04:43 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:32.831 00:05:32.832 real 0m1.758s 00:05:32.832 user 0m1.224s 00:05:32.832 sys 0m0.544s 00:05:32.832 21:04:43 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:32.832 21:04:43 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:32.832 ************************************ 00:05:32.832 END TEST accel_decomp_mthread 00:05:32.832 ************************************ 00:05:32.832 21:04:43 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:32.832 21:04:43 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:32.832 21:04:43 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:05:32.832 21:04:43 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:32.832 21:04:43 accel -- common/autotest_common.sh@10 -- # set +x 00:05:32.832 ************************************ 00:05:32.832 START TEST accel_decomp_full_mthread 00:05:32.832 ************************************ 00:05:32.832 21:04:44 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:32.832 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:05:32.832 21:04:44 
accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:05:32.832 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:32.832 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:32.832 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:32.832 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/examples/accel_perf -c /tmp//sh-np.n1l1AZ -t 1 -w decompress -l /home/vagrant/spdk_repo/spdk/test/accel/bib -y -o 0 -T 2 00:05:32.832 [2024-07-14 21:04:44.012735] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:32.832 [2024-07-14 21:04:44.012912] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:33.091 EAL: TSC is not safe to use in SMP mode 00:05:33.091 EAL: TSC is not invariant 00:05:33.091 [2024-07-14 21:04:44.540331] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.091 [2024-07-14 21:04:44.619778] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 
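build_accel_config (accel/accel.sh@31-@41, traced just above) appears in the log only in expanded form -- every guard prints as [[ 0 -gt 0 ]] or [[ -n '' ]] -- so the variable and method names in the following sketch are placeholders, not SPDK's real ones; only the shape (an array of JSON fragments, comma-joined and validated with jq) is taken from the trace.

build_accel_config() {
    accel_json_cfg=()                          # accel.sh@31
    # optional engine knobs; all three expanded to [[ 0 -gt 0 ]] in this run,
    # so no hardware module is configured (names below are placeholders)
    [[ ${ACCEL_HW_A:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "enable_hw_a"}')
    [[ ${ACCEL_HW_B:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "enable_hw_b"}')
    [[ ${ACCEL_HW_C:-0} -gt 0 ]] && accel_json_cfg+=('{"method": "enable_hw_c"}')
    [[ -n ${ACCEL_EXTRA_JSON:-} ]] && accel_json_cfg+=("$ACCEL_EXTRA_JSON")  # traced as [[ -n '' ]]
    local IFS=,                                # accel.sh@40: comma-join the fragments
    jq -r . <<< "[${accel_json_cfg[*]}]"       # accel.sh@41: validate and emit the JSON config
}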
00:05:33.091 [2024-07-14 21:04:44.632378] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.091 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val=software 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/home/vagrant/spdk_repo/spdk/test/accel/bib 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:33.353 21:04:44 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.286 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.286 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 
00:05:34.286 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.286 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.286 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.286 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:05:34.287 00:05:34.287 real 0m1.813s 00:05:34.287 user 0m1.266s 00:05:34.287 sys 0m0.558s 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:34.287 ************************************ 00:05:34.287 END TEST accel_decomp_full_mthread 00:05:34.287 21:04:45 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:05:34.287 ************************************ 00:05:34.545 21:04:45 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:34.545 21:04:45 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:05:34.546 21:04:45 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.Kng6U5 00:05:34.546 21:04:45 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:34.546 21:04:45 accel -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:05:34.546 21:04:45 accel -- common/autotest_common.sh@10 -- # set +x 00:05:34.546 ************************************ 00:05:34.546 START TEST accel_dif_functional_tests 00:05:34.546 ************************************ 00:05:34.546 21:04:45 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/dif/dif -c /tmp//sh-np.Kng6U5 00:05:34.546 [2024-07-14 21:04:45.877721] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:34.546 [2024-07-14 21:04:45.878024] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:35.113 EAL: TSC is not safe to use in SMP mode 00:05:35.113 EAL: TSC is not invariant 00:05:35.113 [2024-07-14 21:04:46.396183] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.113 [2024-07-14 21:04:46.472334] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:35.113 [2024-07-14 21:04:46.472399] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:05:35.113 [2024-07-14 21:04:46.472423] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:35.113 21:04:46 accel -- accel/accel.sh@137 -- # build_accel_config 00:05:35.113 21:04:46 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.113 21:04:46 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.113 21:04:46 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.113 21:04:46 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.113 21:04:46 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.113 21:04:46 accel -- accel/accel.sh@40 -- # local IFS=, 00:05:35.113 21:04:46 accel -- accel/accel.sh@41 -- # jq -r . 
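Every START TEST/END TEST pair in this log, including the accel_dif_functional_tests one opening above, comes from the run_test wrapper in autotest_common.sh. The traced fragments -- the '[' 4 -le 1 ']' argument-count guard at @1099, xtrace_disable at @1105/@1124, return 0 at @1142, and the bash time output (real/user/sys) after each test -- suggest roughly the shape below; the usage message and banner width are approximations.

run_test() {                                   # sketch, not autotest_common.sh's exact body
    local test_name=$1
    if [ $# -le 1 ]; then                      # @1099: traced as '[' 4 -le 1 ']', '[' 13 -le 1 ']', ...
        echo "usage: run_test <name> <command> [args...]" >&2
        return 1
    fi
    shift
    echo "************************************"
    echo "START TEST $test_name"
    echo "************************************"
    time "$@"                                  # emits the real/user/sys lines seen after each test
    echo "************************************"
    echo "END TEST $test_name"
    echo "************************************"
    return 0                                   # @1142
}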
00:05:35.113 [2024-07-14 21:04:46.484789] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.113 [2024-07-14 21:04:46.484713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.113 [2024-07-14 21:04:46.484786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.113 00:05:35.113 00:05:35.113 CUnit - A unit testing framework for C - Version 2.1-3 00:05:35.113 http://cunit.sourceforge.net/ 00:05:35.113 00:05:35.113 00:05:35.113 Suite: accel_dif 00:05:35.113 Test: verify: DIF generated, GUARD check ...passed 00:05:35.113 Test: verify: DIF generated, APPTAG check ...passed 00:05:35.113 Test: verify: DIF generated, REFTAG check ...passed 00:05:35.113 Test: verify: DIF not generated, GUARD check ...passed 00:05:35.113 Test: verify: DIF not generated, APPTAG check ...passed 00:05:35.113 Test: verify: DIF not generated, REFTAG check ...passed 00:05:35.113 Test: verify: APPTAG correct, APPTAG check ...passed 00:05:35.113 Test: verify: APPTAG incorrect, APPTAG check ...passed 00:05:35.113 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:05:35.113 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:05:35.113 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:05:35.113 Test: verify: REFTAG_INIT incorrect, REFTAG check ...passed 00:05:35.113 Test: verify copy: DIF generated, GUARD check ...passed 00:05:35.113 Test: verify copy: DIF generated, APPTAG check ...[2024-07-14 21:04:46.499185] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:35.113 [2024-07-14 21:04:46.499243] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:35.113 [2024-07-14 21:04:46.499263] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:35.113 [2024-07-14 21:04:46.499312] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:05:35.113 [2024-07-14 21:04:46.499405] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:05:35.113 passed 00:05:35.113 Test: verify copy: DIF generated, REFTAG check ...passed 00:05:35.113 Test: verify copy: DIF not generated, GUARD check ...passed 00:05:35.113 Test: verify copy: DIF not generated, APPTAG check ...passed 00:05:35.113 Test: verify copy: DIF not generated, REFTAG check ...passed 00:05:35.114 Test: generate copy: DIF generated, GUARD check ...passed 00:05:35.114 Test: generate copy: DIF generated, APTTAG check ...passed 00:05:35.114 Test: generate copy: DIF generated, REFTAG check ...passed 00:05:35.114 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:05:35.114 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:05:35.114 Test: generate copy: DIF generated, no REFTAG check flag set ...[2024-07-14 21:04:46.499473] dif.c: 828:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:05:35.114 [2024-07-14 21:04:46.499494] dif.c: 843:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:05:35.114 [2024-07-14 21:04:46.499515] dif.c: 778:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:05:35.114 passed 00:05:35.114 Test: generate copy: iovecs-len validate ...passed 00:05:35.114 Test: generate copy: buffer alignment validate ...passed 00:05:35.114 00:05:35.114 Run Summary: Type Total Ran Passed Failed Inactive 00:05:35.114 suites 
1 1 n/a 0 0 00:05:35.114 tests 26 26 26 0 0 00:05:35.114 asserts 115 115 115 0 n/a 00:05:35.114 00:05:35.114 Elapsed time = 0.000 seconds 00:05:35.114 [2024-07-14 21:04:46.499629] dif.c:1190:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 00:05:35.373 00:05:35.373 real 0m0.795s 00:05:35.373 user 0m0.389s 00:05:35.373 sys 0m0.544s 00:05:35.373 21:04:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.373 21:04:46 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:05:35.373 ************************************ 00:05:35.373 END TEST accel_dif_functional_tests 00:05:35.373 ************************************ 00:05:35.373 21:04:46 accel -- common/autotest_common.sh@1142 -- # return 0 00:05:35.373 00:05:35.373 real 0m40.674s 00:05:35.373 user 0m32.954s 00:05:35.373 sys 0m14.481s 00:05:35.373 ************************************ 00:05:35.373 END TEST accel 00:05:35.373 ************************************ 00:05:35.373 21:04:46 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:05:35.373 21:04:46 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:05:35.373 21:04:46 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:35.373 21:04:46 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:05:35.373 21:04:46 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.373 21:04:46 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.373 21:04:46 accel -- common/autotest_common.sh@10 -- # set +x 00:05:35.373 21:04:46 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.373 21:04:46 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:05:35.373 21:04:46 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.373 21:04:46 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.373 21:04:46 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.373 21:04:46 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:05:35.373 21:04:46 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.373 21:04:46 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.373 21:04:46 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:05:35.373 21:04:46 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.373 21:04:46 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.373 21:04:46 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:05:35.373 21:04:46 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:05:35.373 21:04:46 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:05:35.373 21:04:46 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:05:35.373 21:04:46 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:05:35.373 21:04:46 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 00:05:35.373 21:04:46 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:05:35.373 21:04:46 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 
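Most of the sheer volume of the accel suite above is one pattern repeating: IFS=: and read -r var val at accel/accel.sh@19, an assignment traced fully expanded at @20 (val=software, val=32, val='1 seconds', ...), and case "$var" in at @21, with accel_module= (@22) and accel_opc= (@23) fired for matching keys. That is consistent with a reader loop of roughly the following shape; the input source and the case patterns are not visible in the trace, so they are placeholders here.

# parse name:value pairs describing the accel job (sketch; $job_spec is a placeholder)
while IFS=: read -r var val; do            # accel.sh@19 -- both commands trace from this line
    val=${val# }                           # accel.sh@20 -- xtrace shows the expanded result only
    case "$var" in                         # accel.sh@21
        *module*) accel_module=$val ;;     # accel.sh@22 -- e.g. software
        *opc*) accel_opc=$val ;;           # accel.sh@23 -- e.g. decompress
    esac
done <<< "$job_spec"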
00:05:35.373 21:04:46 -- common/autotest_common.sh@1142 -- # return 0 00:05:35.373 21:04:46 -- spdk/autotest.sh@184 -- # run_test accel_rpc /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:35.373 21:04:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:35.373 21:04:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:35.373 21:04:46 -- common/autotest_common.sh@10 -- # set +x 00:05:35.373 ************************************ 00:05:35.373 START TEST accel_rpc 00:05:35.373 ************************************ 00:05:35.373 21:04:46 accel_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/accel/accel_rpc.sh 00:05:35.373 * Looking for test storage... 00:05:35.373 * Found test storage at /home/vagrant/spdk_repo/spdk/test/accel 00:05:35.373 21:04:46 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:35.373 21:04:46 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=47459 00:05:35.373 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.373 21:04:46 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 47459 00:05:35.373 21:04:46 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 47459 ']' 00:05:35.374 21:04:46 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:35.374 21:04:46 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:35.374 21:04:46 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.374 21:04:46 accel_rpc -- accel/accel_rpc.sh@13 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:05:35.374 21:04:46 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:35.374 21:04:46 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:35.374 [2024-07-14 21:04:46.894087] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:35.374 [2024-07-14 21:04:46.894345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:35.940 EAL: TSC is not safe to use in SMP mode 00:05:35.940 EAL: TSC is not invariant 00:05:35.940 [2024-07-14 21:04:47.426673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.198 [2024-07-14 21:04:47.502037] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
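waitforlisten 47459 (accel/accel_rpc.sh@15, traced above) shows only its prologue: the '[' -z 47459 ']' guard at @829, rpc_addr=/var/tmp/spdk.sock at @833, max_retries=100 at @834, and the 'Waiting for process...' echo at @836, with (( i == 0 )) and return 0 appearing once the target answers. A plausible shape follows; the readiness probe is a placeholder, since xtrace is disabled (@838) around the polling itself.

waitforlisten() {
    local pid=$1
    [ -z "$pid" ] && return 1                        # @829
    local rpc_addr=${2:-/var/tmp/spdk.sock}          # @833
    local max_retries=100                            # @834
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."  # @836
    local i
    for ((i = max_retries; i > 0; i--)); do
        kill -0 "$pid" 2> /dev/null || return 1      # give up if spdk_tgt already died
        [ -S "$rpc_addr" ] && break                  # placeholder probe; the real check likely issues an RPC
        sleep 0.5
    done
    ((i == 0)) && return 1                           # @858: retries exhausted
    return 0                                         # @862
}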
00:05:36.198 [2024-07-14 21:04:47.504490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.456 21:04:47 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:36.456 21:04:47 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:36.456 21:04:47 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:05:36.456 21:04:47 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:05:36.456 21:04:47 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:05:36.456 21:04:47 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:05:36.456 21:04:47 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:05:36.456 21:04:47 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.456 21:04:47 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.456 21:04:47 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.456 ************************************ 00:05:36.456 START TEST accel_assign_opcode 00:05:36.456 ************************************ 00:05:36.456 21:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:05:36.456 21:04:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:05:36.456 21:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.456 21:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:36.456 [2024-07-14 21:04:47.972862] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:05:36.456 21:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.456 21:04:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:05:36.456 21:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.456 21:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:36.456 [2024-07-14 21:04:47.980858] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:05:36.456 21:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.456 21:04:47 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:05:36.456 21:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.456 21:04:47 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:36.715 21:04:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.715 21:04:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:05:36.715 21:04:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:36.715 21:04:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:36.715 21:04:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:05:36.715 21:04:48 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:05:36.715 21:04:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:36.715 software 00:05:36.715 00:05:36.715 real 0m0.065s 00:05:36.715 user 0m0.005s 00:05:36.715 sys 0m0.013s 00:05:36.715 21:04:48 accel_rpc.accel_assign_opcode -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.715 21:04:48 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:05:36.715 ************************************ 00:05:36.715 END TEST accel_assign_opcode 00:05:36.715 ************************************ 00:05:36.715 21:04:48 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:05:36.715 21:04:48 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 47459 00:05:36.715 21:04:48 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 47459 ']' 00:05:36.715 21:04:48 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 47459 00:05:36.715 21:04:48 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:05:36.715 21:04:48 accel_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:36.715 21:04:48 accel_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 47459 00:05:36.715 21:04:48 accel_rpc -- common/autotest_common.sh@956 -- # tail -1 00:05:36.715 21:04:48 accel_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:36.715 killing process with pid 47459 00:05:36.715 21:04:48 accel_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:36.715 21:04:48 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47459' 00:05:36.715 21:04:48 accel_rpc -- common/autotest_common.sh@967 -- # kill 47459 00:05:36.715 21:04:48 accel_rpc -- common/autotest_common.sh@972 -- # wait 47459 00:05:36.973 00:05:36.973 real 0m1.598s 00:05:36.973 user 0m1.499s 00:05:36.973 sys 0m0.770s 00:05:36.973 ************************************ 00:05:36.973 END TEST accel_rpc 00:05:36.973 ************************************ 00:05:36.973 21:04:48 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:36.973 21:04:48 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:36.973 21:04:48 -- common/autotest_common.sh@1142 -- # return 0 00:05:36.973 21:04:48 -- spdk/autotest.sh@185 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:36.973 21:04:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:36.973 21:04:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:36.973 21:04:48 -- common/autotest_common.sh@10 -- # set +x 00:05:36.973 ************************************ 00:05:36.973 START TEST app_cmdline 00:05:36.973 ************************************ 00:05:36.973 21:04:48 app_cmdline -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:37.231 * Looking for test storage... 00:05:37.231 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:37.231 21:04:48 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:37.231 21:04:48 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=47541 00:05:37.231 21:04:48 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 47541 00:05:37.231 21:04:48 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:37.231 21:04:48 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 47541 ']' 00:05:37.231 21:04:48 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.231 21:04:48 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.231 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
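The opcode sequence the accel suite just ran is the substance of the test: accel_assign_opc is only accepted while the target is pre-init, the deliberately bogus module name "incorrect" is still recorded (the NOTICE above, not an error), the reassignment to software wins, and the result is only checked after framework_start_init. Condensed into direct rpc.py calls; the suite issues the same RPCs through its rpc_cmd wrapper:

rpc.py accel_assign_opc -o copy -m incorrect   # accepted pre-init; validation is deferred
rpc.py accel_assign_opc -o copy -m software    # later assignment for the opcode wins
rpc.py framework_start_init                    # subsystems come up; assignments take effect
rpc.py accel_get_opc_assignments | jq -r .copy | grep software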
00:05:37.231 21:04:48 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.231 21:04:48 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.231 21:04:48 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:37.231 [2024-07-14 21:04:48.550117] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:37.231 [2024-07-14 21:04:48.550335] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:37.797 EAL: TSC is not safe to use in SMP mode 00:05:37.797 EAL: TSC is not invariant 00:05:37.797 [2024-07-14 21:04:49.079372] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:37.797 [2024-07-14 21:04:49.156270] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:37.797 [2024-07-14 21:04:49.158659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:38.055 21:04:49 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:38.055 21:04:49 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:05:38.055 21:04:49 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:38.313 { 00:05:38.313 "version": "SPDK v24.09-pre git sha1 719d03c6a", 00:05:38.313 "fields": { 00:05:38.313 "major": 24, 00:05:38.313 "minor": 9, 00:05:38.313 "patch": 0, 00:05:38.313 "suffix": "-pre", 00:05:38.313 "commit": "719d03c6a" 00:05:38.313 } 00:05:38.313 } 00:05:38.313 21:04:49 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:38.313 21:04:49 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:38.313 21:04:49 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:38.313 21:04:49 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:38.313 21:04:49 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:38.313 21:04:49 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:38.313 21:04:49 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:38.313 21:04:49 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:38.313 21:04:49 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:38.313 21:04:49 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@640 -- # case 
"$(type -t "$arg")" in 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:38.313 21:04:49 app_cmdline -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:38.571 request: 00:05:38.571 { 00:05:38.571 "method": "env_dpdk_get_mem_stats", 00:05:38.571 "req_id": 1 00:05:38.571 } 00:05:38.571 Got JSON-RPC error response 00:05:38.571 response: 00:05:38.571 { 00:05:38.571 "code": -32601, 00:05:38.571 "message": "Method not found" 00:05:38.571 } 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:38.571 21:04:50 app_cmdline -- app/cmdline.sh@1 -- # killprocess 47541 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 47541 ']' 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 47541 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@956 -- # tail -1 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@956 -- # ps -c -o command 47541 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47541' 00:05:38.571 killing process with pid 47541 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@967 -- # kill 47541 00:05:38.571 21:04:50 app_cmdline -- common/autotest_common.sh@972 -- # wait 47541 00:05:38.829 00:05:38.829 real 0m1.905s 00:05:38.829 user 0m2.142s 00:05:38.829 sys 0m0.814s 00:05:38.829 21:04:50 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:38.829 ************************************ 00:05:38.829 END TEST app_cmdline 00:05:38.829 ************************************ 00:05:38.829 21:04:50 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:38.829 21:04:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:38.829 21:04:50 -- spdk/autotest.sh@186 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:38.829 21:04:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:38.829 21:04:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:38.829 21:04:50 -- common/autotest_common.sh@10 -- # set +x 00:05:38.830 ************************************ 00:05:38.830 START TEST version 00:05:38.830 ************************************ 00:05:38.830 21:04:50 version -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:39.088 * Looking for test storage... 
00:05:39.088 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:39.088 21:04:50 version -- app/version.sh@17 -- # get_header_version major 00:05:39.088 21:04:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:39.088 21:04:50 version -- app/version.sh@14 -- # cut -f2 00:05:39.088 21:04:50 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.088 21:04:50 version -- app/version.sh@17 -- # major=24 00:05:39.088 21:04:50 version -- app/version.sh@18 -- # get_header_version minor 00:05:39.088 21:04:50 version -- app/version.sh@14 -- # cut -f2 00:05:39.088 21:04:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:39.088 21:04:50 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.088 21:04:50 version -- app/version.sh@18 -- # minor=9 00:05:39.088 21:04:50 version -- app/version.sh@19 -- # get_header_version patch 00:05:39.088 21:04:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:39.088 21:04:50 version -- app/version.sh@14 -- # cut -f2 00:05:39.088 21:04:50 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.088 21:04:50 version -- app/version.sh@19 -- # patch=0 00:05:39.088 21:04:50 version -- app/version.sh@20 -- # get_header_version suffix 00:05:39.088 21:04:50 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:39.088 21:04:50 version -- app/version.sh@14 -- # cut -f2 00:05:39.088 21:04:50 version -- app/version.sh@14 -- # tr -d '"' 00:05:39.088 21:04:50 version -- app/version.sh@20 -- # suffix=-pre 00:05:39.088 21:04:50 version -- app/version.sh@22 -- # version=24.9 00:05:39.088 21:04:50 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:39.088 21:04:50 version -- app/version.sh@28 -- # version=24.9rc0 00:05:39.088 21:04:50 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:39.088 21:04:50 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:39.088 21:04:50 version -- app/version.sh@30 -- # py_version=24.9rc0 00:05:39.088 21:04:50 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:05:39.088 00:05:39.088 real 0m0.180s 00:05:39.088 user 0m0.102s 00:05:39.088 sys 0m0.146s 00:05:39.088 21:04:50 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:39.088 21:04:50 version -- common/autotest_common.sh@10 -- # set +x 00:05:39.088 ************************************ 00:05:39.088 END TEST version 00:05:39.088 ************************************ 00:05:39.088 21:04:50 -- common/autotest_common.sh@1142 -- # return 0 00:05:39.088 21:04:50 -- spdk/autotest.sh@188 -- # '[' 1 -eq 1 ']' 00:05:39.088 21:04:50 -- spdk/autotest.sh@189 -- # run_test blockdev_general /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:39.088 21:04:50 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:39.088 21:04:50 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:39.088 21:04:50 -- common/autotest_common.sh@10 -- # set +x 00:05:39.088 ************************************ 00:05:39.088 START TEST blockdev_general 00:05:39.088 
************************************ 00:05:39.088 21:04:50 blockdev_general -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh 00:05:39.346 * Looking for test storage... 00:05:39.346 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:39.346 21:04:50 blockdev_general -- bdev/nbd_common.sh@6 -- # set -e 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@20 -- # : 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@674 -- # uname -s 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@682 -- # test_type=bdev 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@683 -- # crypto_device= 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@684 -- # dek= 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@685 -- # env_ctx= 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@690 -- # [[ bdev == bdev ]] 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@691 -- # wait_for_rpc=--wait-for-rpc 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=47676 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:39.346 21:04:50 blockdev_general -- bdev/blockdev.sh@49 -- # waitforlisten 47676 00:05:39.347 21:04:50 blockdev_general -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' --wait-for-rpc 00:05:39.347 21:04:50 blockdev_general -- common/autotest_common.sh@829 -- # '[' -z 47676 ']' 00:05:39.347 21:04:50 blockdev_general -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.347 21:04:50 blockdev_general -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:39.347 21:04:50 blockdev_general -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.347 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
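Backing up over the two app-level suites that just completed: app_cmdline verified RPC allowlisting, i.e. with spdk_tgt under --rpcs-allowed only the two whitelisted methods answer and anything else fails with JSON-RPC -32601, and version cross-checked the C headers against the Python package. Both are condensed below into direct calls; rpc.py stands in for the suites' rpc_cmd wrapper, and cut -f2 assumes a tab between macro name and value in version.h:

# app_cmdline: an allowlisted target rejects everything else with "Method not found"
/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods &
rpc.py spdk_get_version                      # allowed: reports SPDK v24.09-pre, commit 719d03c6a
rpc.py rpc_get_methods | jq -r '.[]' | sort  # exactly two methods come back
rpc.py env_dpdk_get_mem_stats                # rejected: {"code": -32601, "message": "Method not found"}

# version: pull each field out of include/spdk/version.h
hdr=/home/vagrant/spdk_repo/spdk/include/spdk/version.h
major=$(grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
minor=$(grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
suffix=$(grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' "$hdr" | cut -f2 | tr -d '"')
echo "${major}.${minor}${suffix}"            # 24.9-pre here; the suite maps a zero patch plus -pre to 24.9rc0
python3 -c 'import spdk; print(spdk.__version__)'   # must agree: 24.9rc0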
00:05:39.347 21:04:50 blockdev_general -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:39.347 21:04:50 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:39.347 [2024-07-14 21:04:50.731175] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:39.347 [2024-07-14 21:04:50.731424] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:39.912 EAL: TSC is not safe to use in SMP mode 00:05:39.912 EAL: TSC is not invariant 00:05:39.912 [2024-07-14 21:04:51.282485] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:39.912 [2024-07-14 21:04:51.362747] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:39.912 [2024-07-14 21:04:51.365143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:40.479 21:04:51 blockdev_general -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:40.479 21:04:51 blockdev_general -- common/autotest_common.sh@862 -- # return 0 00:05:40.479 21:04:51 blockdev_general -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:05:40.479 21:04:51 blockdev_general -- bdev/blockdev.sh@696 -- # setup_bdev_conf 00:05:40.479 21:04:51 blockdev_general -- bdev/blockdev.sh@53 -- # rpc_cmd 00:05:40.479 21:04:51 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.479 21:04:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:40.479 [2024-07-14 21:04:51.789202] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:40.479 [2024-07-14 21:04:51.789268] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:40.479 00:05:40.479 [2024-07-14 21:04:51.797171] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:40.479 [2024-07-14 21:04:51.797232] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:40.479 00:05:40.479 Malloc0 00:05:40.479 Malloc1 00:05:40.479 Malloc2 00:05:40.479 Malloc3 00:05:40.479 Malloc4 00:05:40.479 Malloc5 00:05:40.479 Malloc6 00:05:40.479 Malloc7 00:05:40.479 Malloc8 00:05:40.479 Malloc9 00:05:40.479 [2024-07-14 21:04:51.885168] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:40.479 [2024-07-14 21:04:51.885236] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:40.479 [2024-07-14 21:04:51.885266] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x16dde603a980 00:05:40.479 [2024-07-14 21:04:51.885273] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:40.479 [2024-07-14 21:04:51.885683] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:40.479 [2024-07-14 21:04:51.885709] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:40.479 TestPT 00:05:40.479 21:04:51 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.479 21:04:51 blockdev_general -- bdev/blockdev.sh@76 -- # dd if=/dev/zero of=/home/vagrant/spdk_repo/spdk/test/bdev/aiofile bs=2048 count=5000 00:05:40.479 5000+0 records in 00:05:40.479 5000+0 records out 00:05:40.479 10240000 bytes transferred in 0.023414 secs (437340490 bytes/sec) 00:05:40.479 21:04:51 blockdev_general -- bdev/blockdev.sh@77 -- # rpc_cmd bdev_aio_create /home/vagrant/spdk_repo/spdk/test/bdev/aiofile AIO0 
2048 00:05:40.479 21:04:51 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.479 21:04:51 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:40.479 AIO0 00:05:40.479 21:04:52 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.479 21:04:52 blockdev_general -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:05:40.479 21:04:52 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.479 21:04:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:40.479 21:04:52 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.479 21:04:52 blockdev_general -- bdev/blockdev.sh@740 -- # cat 00:05:40.738 21:04:52 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:05:40.738 21:04:52 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.738 21:04:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:40.738 21:04:52 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.738 21:04:52 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:05:40.738 21:04:52 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.738 21:04:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:40.738 21:04:52 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.738 21:04:52 blockdev_general -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:40.738 21:04:52 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.738 21:04:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:40.738 21:04:52 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.738 21:04:52 blockdev_general -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:05:40.738 21:04:52 blockdev_general -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:05:40.738 21:04:52 blockdev_general -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:05:40.738 21:04:52 blockdev_general -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:40.738 21:04:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:40.998 21:04:52 blockdev_general -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:40.998 21:04:52 blockdev_general -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:05:40.998 21:04:52 blockdev_general -- bdev/blockdev.sh@749 -- # jq -r .name 00:05:40.999 21:04:52 blockdev_general -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "b66438ca-4224-11ef-aa83-81fbc7dfef58"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b66438ca-4224-11ef-aa83-81fbc7dfef58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' 
"dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "25576fe3-968c-c556-a59f-74d2151f082d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "25576fe3-968c-c556-a59f-74d2151f082d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "0c4cb360-dded-3e58-a1c4-ca13f4906ec1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0c4cb360-dded-3e58-a1c4-ca13f4906ec1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "12b6a24d-4aa0-0753-b327-50cedc0f1740"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "12b6a24d-4aa0-0753-b327-50cedc0f1740",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "68eba0c0-7415-bd56-9ab1-31dbecd15e08"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "68eba0c0-7415-bd56-9ab1-31dbecd15e08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' 
"unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "56c76647-e945-3c59-b874-21fd4dda1333"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56c76647-e945-3c59-b874-21fd4dda1333",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "81c49e4b-d901-175b-977e-4bc33cbc923d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "81c49e4b-d901-175b-977e-4bc33cbc923d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "fc2747a9-879b-5d5f-ba69-81204fd3b631"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fc2747a9-879b-5d5f-ba69-81204fd3b631",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "b410fd06-a1d2-ec5c-aa8c-214b0c919a38"' ' ],' ' "product_name": "Split 
Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b410fd06-a1d2-ec5c-aa8c-214b0c919a38",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e7081617-7125-0a5a-a6f9-24cf5c86b134"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e7081617-7125-0a5a-a6f9-24cf5c86b134",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "abaa31ba-6443-a05e-a947-e52f66c33b38"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "abaa31ba-6443-a05e-a947-e52f66c33b38",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4f2802dc-f8aa-5a59-84c7-a2a91807a246"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4f2802dc-f8aa-5a59-84c7-a2a91807a246",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": 
false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "b671b1d3-4224-11ef-aa83-81fbc7dfef58"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b671b1d3-4224-11ef-aa83-81fbc7dfef58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b671b1d3-4224-11ef-aa83-81fbc7dfef58",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "b6691969-4224-11ef-aa83-81fbc7dfef58",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "b66a51e5-4224-11ef-aa83-81fbc7dfef58",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "b672ded7-4224-11ef-aa83-81fbc7dfef58"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b672ded7-4224-11ef-aa83-81fbc7dfef58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b672ded7-4224-11ef-aa83-81fbc7dfef58",' ' "strip_size_kb": 64,' ' "state": "online",' ' 
"raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b66b8a64-4224-11ef-aa83-81fbc7dfef58",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "b66cc2f1-4224-11ef-aa83-81fbc7dfef58",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b67416a7-4224-11ef-aa83-81fbc7dfef58"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b67416a7-4224-11ef-aa83-81fbc7dfef58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b67416a7-4224-11ef-aa83-81fbc7dfef58",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "b66dfb67-4224-11ef-aa83-81fbc7dfef58",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b66f33e7-4224-11ef-aa83-81fbc7dfef58",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b67c07fe-4224-11ef-aa83-81fbc7dfef58"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b67c07fe-4224-11ef-aa83-81fbc7dfef58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:40.999 21:04:52 blockdev_general -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:05:40.999 21:04:52 blockdev_general -- bdev/blockdev.sh@752 -- # 
hello_world_bdev=Malloc0 00:05:40.999 21:04:52 blockdev_general -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:05:40.999 21:04:52 blockdev_general -- bdev/blockdev.sh@754 -- # killprocess 47676 00:05:40.999 21:04:52 blockdev_general -- common/autotest_common.sh@948 -- # '[' -z 47676 ']' 00:05:40.999 21:04:52 blockdev_general -- common/autotest_common.sh@952 -- # kill -0 47676 00:05:40.999 21:04:52 blockdev_general -- common/autotest_common.sh@953 -- # uname 00:05:40.999 21:04:52 blockdev_general -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:05:40.999 21:04:52 blockdev_general -- common/autotest_common.sh@956 -- # tail -1 00:05:40.999 21:04:52 blockdev_general -- common/autotest_common.sh@956 -- # ps -c -o command 47676 00:05:40.999 21:04:52 blockdev_general -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:05:40.999 21:04:52 blockdev_general -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:05:40.999 killing process with pid 47676 00:05:40.999 21:04:52 blockdev_general -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47676' 00:05:40.999 21:04:52 blockdev_general -- common/autotest_common.sh@967 -- # kill 47676 00:05:40.999 21:04:52 blockdev_general -- common/autotest_common.sh@972 -- # wait 47676 00:05:41.257 21:04:52 blockdev_general -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:41.257 21:04:52 blockdev_general -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:41.257 21:04:52 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:05:41.257 21:04:52 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:41.257 21:04:52 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:41.258 ************************************ 00:05:41.258 START TEST bdev_hello_world 00:05:41.258 ************************************ 00:05:41.258 21:04:52 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Malloc0 '' 00:05:41.258 [2024-07-14 21:04:52.656297] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:41.258 [2024-07-14 21:04:52.656463] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:41.824 EAL: TSC is not safe to use in SMP mode 00:05:41.824 EAL: TSC is not invariant 00:05:41.824 [2024-07-14 21:04:53.160014] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:41.824 [2024-07-14 21:04:53.237098] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
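The long JSON dump above is the fixture blockdev_general assembled before handing Malloc0 to hello_bdev: ten malloc bdevs, splits layered on Malloc1 (2 x 16 MiB) and Malloc2 (8 x 4 MiB), a passthru TestPT claiming Malloc3, raid0/concat/raid1 volumes over Malloc4 through Malloc9, and an AIO bdev over the dd-created file. A sketch of the core calls, with sizes taken from the JSON; the exact rpc.py flag spellings (-b, -p, split counts) are assumed rather than shown verbatim in this log:

rpc.py bdev_malloc_create -b Malloc0 32 512            # 65536 blocks x 512 B = 32 MiB
rpc.py bdev_split_create Malloc1 2                     # Malloc1p0/Malloc1p1, 32768 blocks each
rpc.py bdev_split_create Malloc2 8                     # Malloc2p0..p7, 8192 blocks each
rpc.py bdev_passthru_create -b Malloc3 -p TestPT       # the claim logged by vbdev_passthru above
dd if=/dev/zero of=test/bdev/aiofile bs=2048 count=5000
rpc.py bdev_aio_create test/bdev/aiofile AIO0 2048     # 5000 blocks x 2048 B, as logged
rpc.py bdev_get_bdevs | jq -r '.[] | select(.claimed == false) | .name'   # the unclaimed list behind bdevs_name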
00:05:41.824 [2024-07-14 21:04:53.239796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:41.824 [2024-07-14 21:04:53.297776] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:41.824 [2024-07-14 21:04:53.297825] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:41.824 [2024-07-14 21:04:53.305760] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:41.824 [2024-07-14 21:04:53.305794] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:41.824 [2024-07-14 21:04:53.313772] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:41.824 [2024-07-14 21:04:53.313807] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:41.824 [2024-07-14 21:04:53.313830] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:41.824 [2024-07-14 21:04:53.361778] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:41.824 [2024-07-14 21:04:53.361831] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:41.824 [2024-07-14 21:04:53.361856] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x649f9836800 00:05:41.824 [2024-07-14 21:04:53.361863] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:41.824 [2024-07-14 21:04:53.362278] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:41.824 [2024-07-14 21:04:53.362314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:42.082 [2024-07-14 21:04:53.461861] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:42.082 [2024-07-14 21:04:53.461922] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Malloc0 00:05:42.082 [2024-07-14 21:04:53.461949] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:42.082 [2024-07-14 21:04:53.461962] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:42.082 [2024-07-14 21:04:53.461974] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:42.082 [2024-07-14 21:04:53.461981] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:42.082 [2024-07-14 21:04:53.461992] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
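The write/read round trip logged here is self-contained and can be reproduced outside the harness: hello_bdev opens the bdev named by -b, writes "Hello World!", reads it back, and stops. The invocation, lifted from the run_test line above:

cd /home/vagrant/spdk_repo/spdk
./build/examples/hello_bdev --json test/bdev/bdev.json -b Malloc0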
00:05:42.082 00:05:42.082 [2024-07-14 21:04:53.462000] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:42.340 00:05:42.340 real 0m1.045s 00:05:42.340 user 0m0.500s 00:05:42.340 sys 0m0.543s 00:05:42.340 21:04:53 blockdev_general.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.340 ************************************ 00:05:42.340 END TEST bdev_hello_world 00:05:42.340 ************************************ 00:05:42.340 21:04:53 blockdev_general.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:42.341 21:04:53 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:05:42.341 21:04:53 blockdev_general -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:05:42.341 21:04:53 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:05:42.341 21:04:53 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.341 21:04:53 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:42.341 ************************************ 00:05:42.341 START TEST bdev_bounds 00:05:42.341 ************************************ 00:05:42.341 21:04:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:05:42.341 21:04:53 blockdev_general.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=47728 00:05:42.341 Process bdevio pid: 47728 00:05:42.341 21:04:53 blockdev_general.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:42.341 21:04:53 blockdev_general.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:42.341 21:04:53 blockdev_general.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 47728' 00:05:42.341 21:04:53 blockdev_general.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 47728 00:05:42.341 21:04:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 47728 ']' 00:05:42.341 21:04:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.341 21:04:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:42.341 21:04:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.341 21:04:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.341 21:04:53 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:42.341 [2024-07-14 21:04:53.757779] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:05:42.341 [2024-07-14 21:04:53.757968] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:05:42.908 EAL: TSC is not safe to use in SMP mode 00:05:42.908 EAL: TSC is not invariant 00:05:42.908 [2024-07-14 21:04:54.290314] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:42.908 [2024-07-14 21:04:54.373730] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:05:42.908 [2024-07-14 21:04:54.373775] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
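The bounds suite starting here runs in two halves, both visible in its command line: bdevio comes up in wait mode (-w) on three cores (-c 0x7 in the EAL parameters) with 2048 MB reserved (-s 2048, the PRE_RESERVED_MEM chosen for FreeBSD in blockdev.sh) against the same bdev.json, then tests.py triggers the actual I/O over RPC, producing the per-bdev CUnit suites that follow. Condensed:

./test/bdev/bdevio/bdevio -w -s 2048 --json test/bdev/bdev.json &
./test/bdev/bdevio/tests.py perform_tests   # drives every suite below, AIO0 through the Malloc partitions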
00:05:42.908 [2024-07-14 21:04:54.373798] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:05:42.908 [2024-07-14 21:04:54.377357] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:42.908 [2024-07-14 21:04:54.377244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.908 [2024-07-14 21:04:54.377351] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:42.908 [2024-07-14 21:04:54.435511] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:42.908 [2024-07-14 21:04:54.435565] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:05:42.908 [2024-07-14 21:04:54.443501] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:42.908 [2024-07-14 21:04:54.443539] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:05:42.908 [2024-07-14 21:04:54.451516] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:42.909 [2024-07-14 21:04:54.451577] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:05:42.909 [2024-07-14 21:04:54.451602] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:05:43.167 [2024-07-14 21:04:54.499521] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:05:43.167 [2024-07-14 21:04:54.499576] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:05:43.167 [2024-07-14 21:04:54.499602] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3210d7a36800 00:05:43.167 [2024-07-14 21:04:54.499610] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:05:43.167 [2024-07-14 21:04:54.500011] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:05:43.167 [2024-07-14 21:04:54.500036] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:05:43.426 21:04:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.426 21:04:54 blockdev_general.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:05:43.426 21:04:54 blockdev_general.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:43.426 I/O targets: 00:05:43.426 Malloc0: 65536 blocks of 512 bytes (32 MiB) 00:05:43.426 Malloc1p0: 32768 blocks of 512 bytes (16 MiB) 00:05:43.426 Malloc1p1: 32768 blocks of 512 bytes (16 MiB) 00:05:43.426 Malloc2p0: 8192 blocks of 512 bytes (4 MiB) 00:05:43.426 Malloc2p1: 8192 blocks of 512 bytes (4 MiB) 00:05:43.426 Malloc2p2: 8192 blocks of 512 bytes (4 MiB) 00:05:43.426 Malloc2p3: 8192 blocks of 512 bytes (4 MiB) 00:05:43.426 Malloc2p4: 8192 blocks of 512 bytes (4 MiB) 00:05:43.426 Malloc2p5: 8192 blocks of 512 bytes (4 MiB) 00:05:43.426 Malloc2p6: 8192 blocks of 512 bytes (4 MiB) 00:05:43.426 Malloc2p7: 8192 blocks of 512 bytes (4 MiB) 00:05:43.426 TestPT: 65536 blocks of 512 bytes (32 MiB) 00:05:43.426 raid0: 131072 blocks of 512 bytes (64 MiB) 00:05:43.426 concat0: 131072 blocks of 512 bytes (64 MiB) 00:05:43.426 raid1: 65536 blocks of 512 bytes (32 MiB) 00:05:43.426 AIO0: 5000 blocks of 2048 bytes (10 MiB) 00:05:43.426 00:05:43.426 00:05:43.426 CUnit - A unit testing framework for C - Version 2.1-3 00:05:43.426 http://cunit.sourceforge.net/ 00:05:43.426 00:05:43.426 00:05:43.426 Suite: bdevio tests on: 
AIO0 00:05:43.426 Test: blockdev write read block ...passed 00:05:43.426 Test: blockdev write zeroes read block ...passed 00:05:43.426 Test: blockdev write zeroes read no split ...passed 00:05:43.426 Test: blockdev write zeroes read split ...passed 00:05:43.426 Test: blockdev write zeroes read split partial ...passed 00:05:43.426 Test: blockdev reset ...passed 00:05:43.426 Test: blockdev write read 8 blocks ...passed 00:05:43.686 Test: blockdev write read size > 128k ...passed 00:05:43.686 Test: blockdev write read invalid size ...passed 00:05:43.686 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.686 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:43.686 Test: blockdev write read max offset ...passed 00:05:43.686 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.686 Test: blockdev writev readv 8 blocks ...passed 00:05:43.686 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.686 Test: blockdev writev readv block ...passed 00:05:43.686 Test: blockdev writev readv size > 128k ...passed 00:05:43.686 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.686 Test: blockdev comparev and writev ...passed 00:05:43.686 Test: blockdev nvme passthru rw ...passed 00:05:43.686 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.686 Test: blockdev nvme admin passthru ...passed 00:05:43.686 Test: blockdev copy ...passed 00:05:43.686 Suite: bdevio tests on: raid1 00:05:43.686 Test: blockdev write read block ...passed 00:05:43.686 Test: blockdev write zeroes read block ...passed 00:05:43.686 Test: blockdev write zeroes read no split ...passed 00:05:43.686 Test: blockdev write zeroes read split ...passed 00:05:43.686 Test: blockdev write zeroes read split partial ...passed 00:05:43.686 Test: blockdev reset ...passed 00:05:43.686 Test: blockdev write read 8 blocks ...passed 00:05:43.686 Test: blockdev write read size > 128k ...passed 00:05:43.686 Test: blockdev write read invalid size ...passed 00:05:43.686 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.686 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:43.686 Test: blockdev write read max offset ...passed 00:05:43.686 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.686 Test: blockdev writev readv 8 blocks ...passed 00:05:43.686 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.686 Test: blockdev writev readv block ...passed 00:05:43.686 Test: blockdev writev readv size > 128k ...passed 00:05:43.686 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.686 Test: blockdev comparev and writev ...passed 00:05:43.686 Test: blockdev nvme passthru rw ...passed 00:05:43.686 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.686 Test: blockdev nvme admin passthru ...passed 00:05:43.686 Test: blockdev copy ...passed 00:05:43.686 Suite: bdevio tests on: concat0 00:05:43.686 Test: blockdev write read block ...passed 00:05:43.686 Test: blockdev write zeroes read block ...passed 00:05:43.686 Test: blockdev write zeroes read no split ...passed 00:05:43.686 Test: blockdev write zeroes read split ...passed 00:05:43.686 Test: blockdev write zeroes read split partial ...passed 00:05:43.686 Test: blockdev reset ...passed 00:05:43.686 Test: blockdev write read 8 blocks ...passed 00:05:43.686 Test: blockdev write read size > 128k ...passed 00:05:43.686 Test: blockdev write read invalid size ...passed 00:05:43.686 
Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.686 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:43.686 Test: blockdev write read max offset ...passed 00:05:43.686 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.686 Test: blockdev writev readv 8 blocks ...passed 00:05:43.686 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.687 Test: blockdev writev readv block ...passed 00:05:43.687 Test: blockdev writev readv size > 128k ...passed 00:05:43.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.687 Test: blockdev comparev and writev ...passed 00:05:43.687 Test: blockdev nvme passthru rw ...passed 00:05:43.687 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.687 Test: blockdev nvme admin passthru ...passed 00:05:43.687 Test: blockdev copy ...passed 00:05:43.687 Suite: bdevio tests on: raid0 00:05:43.687 Test: blockdev write read block ...passed 00:05:43.687 Test: blockdev write zeroes read block ...passed 00:05:43.687 Test: blockdev write zeroes read no split ...passed 00:05:43.687 Test: blockdev write zeroes read split ...passed 00:05:43.687 Test: blockdev write zeroes read split partial ...passed 00:05:43.687 Test: blockdev reset ...passed 00:05:43.687 Test: blockdev write read 8 blocks ...passed 00:05:43.687 Test: blockdev write read size > 128k ...passed 00:05:43.687 Test: blockdev write read invalid size ...passed 00:05:43.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:43.687 Test: blockdev write read max offset ...passed 00:05:43.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.687 Test: blockdev writev readv 8 blocks ...passed 00:05:43.687 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.687 Test: blockdev writev readv block ...passed 00:05:43.687 Test: blockdev writev readv size > 128k ...passed 00:05:43.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.687 Test: blockdev comparev and writev ...passed 00:05:43.687 Test: blockdev nvme passthru rw ...passed 00:05:43.687 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.687 Test: blockdev nvme admin passthru ...passed 00:05:43.687 Test: blockdev copy ...passed 00:05:43.687 Suite: bdevio tests on: TestPT 00:05:43.687 Test: blockdev write read block ...passed 00:05:43.687 Test: blockdev write zeroes read block ...passed 00:05:43.687 Test: blockdev write zeroes read no split ...passed 00:05:43.687 Test: blockdev write zeroes read split ...passed 00:05:43.687 Test: blockdev write zeroes read split partial ...passed 00:05:43.687 Test: blockdev reset ...passed 00:05:43.687 Test: blockdev write read 8 blocks ...passed 00:05:43.687 Test: blockdev write read size > 128k ...passed 00:05:43.687 Test: blockdev write read invalid size ...passed 00:05:43.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:43.687 Test: blockdev write read max offset ...passed 00:05:43.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.687 Test: blockdev writev readv 8 blocks ...passed 00:05:43.687 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.687 Test: blockdev writev readv block ...passed 00:05:43.687 Test: blockdev writev readv size > 128k ...passed 
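Each bdevio case above follows the same pattern: write a known payload to the bdev, read it back, and fail the case on any mismatch. A generic shell illustration of that write-then-read-back check (this is not SPDK's bdevio source; it runs against a scratch file, and all paths and sizes are hypothetical):

    # Write 8 known 4 KiB blocks at an offset, read them back, compare.
    img=$(mktemp)                                  # scratch file standing in for a bdev
    dd if=/dev/urandom of=pattern.bin bs=4096 count=8 2>/dev/null
    dd if=pattern.bin of="$img" bs=4096 seek=16 conv=notrunc 2>/dev/null  # "write read 8 blocks": write a pattern...
    dd if="$img" of=readback.bin bs=4096 skip=16 count=8 2>/dev/null      # ...then read the same range back
    cmp -s pattern.bin readback.bin && echo passed || echo failed
    rm -f "$img" pattern.bin readback.bin

The boundary cases ("offset + nbytes > size of blockdev", "write read invalid size") instead assert that the bdev rejects the out-of-range I/O rather than completing it.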
00:05:43.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.687 Test: blockdev comparev and writev ...passed 00:05:43.687 Test: blockdev nvme passthru rw ...passed 00:05:43.687 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.687 Test: blockdev nvme admin passthru ...passed 00:05:43.687 Test: blockdev copy ...passed 00:05:43.687 Suite: bdevio tests on: Malloc2p7 00:05:43.687 Test: blockdev write read block ...passed 00:05:43.687 Test: blockdev write zeroes read block ...passed 00:05:43.687 Test: blockdev write zeroes read no split ...passed 00:05:43.687 Test: blockdev write zeroes read split ...passed 00:05:43.687 Test: blockdev write zeroes read split partial ...passed 00:05:43.687 Test: blockdev reset ...passed 00:05:43.687 Test: blockdev write read 8 blocks ...passed 00:05:43.687 Test: blockdev write read size > 128k ...passed 00:05:43.687 Test: blockdev write read invalid size ...passed 00:05:43.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:43.687 Test: blockdev write read max offset ...passed 00:05:43.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.687 Test: blockdev writev readv 8 blocks ...passed 00:05:43.687 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.687 Test: blockdev writev readv block ...passed 00:05:43.687 Test: blockdev writev readv size > 128k ...passed 00:05:43.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.687 Test: blockdev comparev and writev ...passed 00:05:43.687 Test: blockdev nvme passthru rw ...passed 00:05:43.687 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.687 Test: blockdev nvme admin passthru ...passed 00:05:43.687 Test: blockdev copy ...passed 00:05:43.687 Suite: bdevio tests on: Malloc2p6 00:05:43.687 Test: blockdev write read block ...passed 00:05:43.687 Test: blockdev write zeroes read block ...passed 00:05:43.687 Test: blockdev write zeroes read no split ...passed 00:05:43.687 Test: blockdev write zeroes read split ...passed 00:05:43.687 Test: blockdev write zeroes read split partial ...passed 00:05:43.687 Test: blockdev reset ...passed 00:05:43.687 Test: blockdev write read 8 blocks ...passed 00:05:43.687 Test: blockdev write read size > 128k ...passed 00:05:43.687 Test: blockdev write read invalid size ...passed 00:05:43.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:43.687 Test: blockdev write read max offset ...passed 00:05:43.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.687 Test: blockdev writev readv 8 blocks ...passed 00:05:43.687 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.687 Test: blockdev writev readv block ...passed 00:05:43.687 Test: blockdev writev readv size > 128k ...passed 00:05:43.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.687 Test: blockdev comparev and writev ...passed 00:05:43.687 Test: blockdev nvme passthru rw ...passed 00:05:43.687 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.687 Test: blockdev nvme admin passthru ...passed 00:05:43.687 Test: blockdev copy ...passed 00:05:43.687 Suite: bdevio tests on: Malloc2p5 00:05:43.687 Test: blockdev write read block ...passed 00:05:43.687 Test: blockdev write zeroes read block ...passed 00:05:43.687 Test: blockdev 
write zeroes read no split ...passed 00:05:43.687 Test: blockdev write zeroes read split ...passed 00:05:43.687 Test: blockdev write zeroes read split partial ...passed 00:05:43.687 Test: blockdev reset ...passed 00:05:43.687 Test: blockdev write read 8 blocks ...passed 00:05:43.687 Test: blockdev write read size > 128k ...passed 00:05:43.687 Test: blockdev write read invalid size ...passed 00:05:43.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:43.687 Test: blockdev write read max offset ...passed 00:05:43.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.687 Test: blockdev writev readv 8 blocks ...passed 00:05:43.687 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.687 Test: blockdev writev readv block ...passed 00:05:43.687 Test: blockdev writev readv size > 128k ...passed 00:05:43.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.687 Test: blockdev comparev and writev ...passed 00:05:43.687 Test: blockdev nvme passthru rw ...passed 00:05:43.687 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.687 Test: blockdev nvme admin passthru ...passed 00:05:43.687 Test: blockdev copy ...passed 00:05:43.687 Suite: bdevio tests on: Malloc2p4 00:05:43.687 Test: blockdev write read block ...passed 00:05:43.687 Test: blockdev write zeroes read block ...passed 00:05:43.687 Test: blockdev write zeroes read no split ...passed 00:05:43.687 Test: blockdev write zeroes read split ...passed 00:05:43.687 Test: blockdev write zeroes read split partial ...passed 00:05:43.687 Test: blockdev reset ...passed 00:05:43.687 Test: blockdev write read 8 blocks ...passed 00:05:43.687 Test: blockdev write read size > 128k ...passed 00:05:43.687 Test: blockdev write read invalid size ...passed 00:05:43.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:43.687 Test: blockdev write read max offset ...passed 00:05:43.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.687 Test: blockdev writev readv 8 blocks ...passed 00:05:43.687 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.687 Test: blockdev writev readv block ...passed 00:05:43.687 Test: blockdev writev readv size > 128k ...passed 00:05:43.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.687 Test: blockdev comparev and writev ...passed 00:05:43.687 Test: blockdev nvme passthru rw ...passed 00:05:43.687 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.687 Test: blockdev nvme admin passthru ...passed 00:05:43.687 Test: blockdev copy ...passed 00:05:43.687 Suite: bdevio tests on: Malloc2p3 00:05:43.687 Test: blockdev write read block ...passed 00:05:43.687 Test: blockdev write zeroes read block ...passed 00:05:43.687 Test: blockdev write zeroes read no split ...passed 00:05:43.687 Test: blockdev write zeroes read split ...passed 00:05:43.687 Test: blockdev write zeroes read split partial ...passed 00:05:43.687 Test: blockdev reset ...passed 00:05:43.687 Test: blockdev write read 8 blocks ...passed 00:05:43.687 Test: blockdev write read size > 128k ...passed 00:05:43.687 Test: blockdev write read invalid size ...passed 00:05:43.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.687 Test: blockdev write read offset + nbytes > size of 
blockdev ...passed 00:05:43.687 Test: blockdev write read max offset ...passed 00:05:43.687 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.687 Test: blockdev writev readv 8 blocks ...passed 00:05:43.687 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.687 Test: blockdev writev readv block ...passed 00:05:43.687 Test: blockdev writev readv size > 128k ...passed 00:05:43.687 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.687 Test: blockdev comparev and writev ...passed 00:05:43.687 Test: blockdev nvme passthru rw ...passed 00:05:43.687 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.687 Test: blockdev nvme admin passthru ...passed 00:05:43.687 Test: blockdev copy ...passed 00:05:43.687 Suite: bdevio tests on: Malloc2p2 00:05:43.687 Test: blockdev write read block ...passed 00:05:43.687 Test: blockdev write zeroes read block ...passed 00:05:43.687 Test: blockdev write zeroes read no split ...passed 00:05:43.687 Test: blockdev write zeroes read split ...passed 00:05:43.687 Test: blockdev write zeroes read split partial ...passed 00:05:43.687 Test: blockdev reset ...passed 00:05:43.687 Test: blockdev write read 8 blocks ...passed 00:05:43.687 Test: blockdev write read size > 128k ...passed 00:05:43.687 Test: blockdev write read invalid size ...passed 00:05:43.687 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.687 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:43.687 Test: blockdev write read max offset ...passed 00:05:43.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.688 Test: blockdev writev readv 8 blocks ...passed 00:05:43.688 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.688 Test: blockdev writev readv block ...passed 00:05:43.688 Test: blockdev writev readv size > 128k ...passed 00:05:43.688 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.688 Test: blockdev comparev and writev ...passed 00:05:43.688 Test: blockdev nvme passthru rw ...passed 00:05:43.688 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.688 Test: blockdev nvme admin passthru ...passed 00:05:43.688 Test: blockdev copy ...passed 00:05:43.688 Suite: bdevio tests on: Malloc2p1 00:05:43.688 Test: blockdev write read block ...passed 00:05:43.688 Test: blockdev write zeroes read block ...passed 00:05:43.688 Test: blockdev write zeroes read no split ...passed 00:05:43.688 Test: blockdev write zeroes read split ...passed 00:05:43.688 Test: blockdev write zeroes read split partial ...passed 00:05:43.688 Test: blockdev reset ...passed 00:05:43.688 Test: blockdev write read 8 blocks ...passed 00:05:43.688 Test: blockdev write read size > 128k ...passed 00:05:43.688 Test: blockdev write read invalid size ...passed 00:05:43.688 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.688 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:43.688 Test: blockdev write read max offset ...passed 00:05:43.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.688 Test: blockdev writev readv 8 blocks ...passed 00:05:43.688 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.688 Test: blockdev writev readv block ...passed 00:05:43.688 Test: blockdev writev readv size > 128k ...passed 00:05:43.688 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.688 Test: blockdev comparev and writev ...passed 
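For orientation: bdevio runs the identical battery of cases against each of the 16 bdevs registered in this configuration (Malloc0 through AIO0), so the totals in the run summary below follow directly from 16 suites times 23 cases per suite:

    echo $((16 * 23))   # 368, matching the "tests 368 368 368 0 0" line in the run summary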
00:05:43.688 Test: blockdev nvme passthru rw ...passed 00:05:43.688 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.688 Test: blockdev nvme admin passthru ...passed 00:05:43.688 Test: blockdev copy ...passed 00:05:43.688 Suite: bdevio tests on: Malloc2p0 00:05:43.688 Test: blockdev write read block ...passed 00:05:43.688 Test: blockdev write zeroes read block ...passed 00:05:43.688 Test: blockdev write zeroes read no split ...passed 00:05:43.688 Test: blockdev write zeroes read split ...passed 00:05:43.688 Test: blockdev write zeroes read split partial ...passed 00:05:43.688 Test: blockdev reset ...passed 00:05:43.688 Test: blockdev write read 8 blocks ...passed 00:05:43.688 Test: blockdev write read size > 128k ...passed 00:05:43.688 Test: blockdev write read invalid size ...passed 00:05:43.688 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.688 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:43.688 Test: blockdev write read max offset ...passed 00:05:43.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.688 Test: blockdev writev readv 8 blocks ...passed 00:05:43.688 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.688 Test: blockdev writev readv block ...passed 00:05:43.688 Test: blockdev writev readv size > 128k ...passed 00:05:43.688 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.688 Test: blockdev comparev and writev ...passed 00:05:43.688 Test: blockdev nvme passthru rw ...passed 00:05:43.688 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.688 Test: blockdev nvme admin passthru ...passed 00:05:43.688 Test: blockdev copy ...passed 00:05:43.688 Suite: bdevio tests on: Malloc1p1 00:05:43.688 Test: blockdev write read block ...passed 00:05:43.688 Test: blockdev write zeroes read block ...passed 00:05:43.688 Test: blockdev write zeroes read no split ...passed 00:05:43.688 Test: blockdev write zeroes read split ...passed 00:05:43.688 Test: blockdev write zeroes read split partial ...passed 00:05:43.688 Test: blockdev reset ...passed 00:05:43.688 Test: blockdev write read 8 blocks ...passed 00:05:43.688 Test: blockdev write read size > 128k ...passed 00:05:43.688 Test: blockdev write read invalid size ...passed 00:05:43.688 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:43.688 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:43.688 Test: blockdev write read max offset ...passed 00:05:43.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:43.688 Test: blockdev writev readv 8 blocks ...passed 00:05:43.688 Test: blockdev writev readv 30 x 1block ...passed 00:05:43.688 Test: blockdev writev readv block ...passed 00:05:43.688 Test: blockdev writev readv size > 128k ...passed 00:05:43.688 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:43.688 Test: blockdev comparev and writev ...passed 00:05:43.688 Test: blockdev nvme passthru rw ...passed 00:05:43.688 Test: blockdev nvme passthru vendor specific ...passed 00:05:43.688 Test: blockdev nvme admin passthru ...passed 00:05:43.688 Test: blockdev copy ...passed 00:05:43.688 Suite: bdevio tests on: Malloc1p0 00:05:43.688 Test: blockdev write read block ...passed 00:05:43.688 Test: blockdev write zeroes read block ...passed 00:05:43.688 Test: blockdev write zeroes read no split ...passed 00:05:43.688 Test: blockdev write zeroes read split ...passed 00:05:43.688 Test: blockdev write 
zeroes read split partial ...passed
00:05:43.688 Test: blockdev reset ...passed
00:05:43.688 Test: blockdev write read 8 blocks ...passed
00:05:43.688 Test: blockdev write read size > 128k ...passed
00:05:43.688 Test: blockdev write read invalid size ...passed
00:05:43.688 Test: blockdev write read offset + nbytes == size of blockdev ...passed
00:05:43.688 Test: blockdev write read offset + nbytes > size of blockdev ...passed
00:05:43.688 Test: blockdev write read max offset ...passed
00:05:43.688 Test: blockdev write read 2 blocks on overlapped address offset ...passed
00:05:43.688 Test: blockdev writev readv 8 blocks ...passed
00:05:43.688 Test: blockdev writev readv 30 x 1block ...passed
00:05:43.688 Test: blockdev writev readv block ...passed
00:05:43.688 Test: blockdev writev readv size > 128k ...passed
00:05:43.688 Test: blockdev writev readv size > 128k in two iovs ...passed
00:05:43.688 Test: blockdev comparev and writev ...passed
00:05:43.688 Test: blockdev nvme passthru rw ...passed
00:05:43.688 Test: blockdev nvme passthru vendor specific ...passed
00:05:43.688 Test: blockdev nvme admin passthru ...passed
00:05:43.688 Test: blockdev copy ...passed
00:05:43.688
00:05:43.688 Run Summary: Type Total Ran Passed Failed Inactive
00:05:43.688 suites 16 16 n/a 0 0
00:05:43.688 tests 368 368 368 0 0
00:05:43.688 asserts 2224 2224 2224 0 n/a
00:05:43.688
00:05:43.688 Elapsed time = 0.562 seconds
00:05:43.688 0
00:05:43.688 21:04:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 47728
00:05:43.688 21:04:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 47728 ']'
00:05:43.688 21:04:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 47728
00:05:43.688 21:04:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # uname
00:05:43.688 21:04:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']'
00:05:43.688 21:04:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o command 47728
00:05:43.688 21:04:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1
00:05:43.688 21:04:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio
00:05:43.688 21:04:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']'
00:05:43.688 21:04:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 47728'
killing process with pid 47728
00:05:43.688 21:04:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@967 -- # kill 47728
00:05:43.688 21:04:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@972 -- # wait 47728
00:05:43.947 21:04:55 blockdev_general.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT
00:05:43.947
00:05:43.947 real 0m1.687s
00:05:43.947 user 0m3.256s
00:05:43.947 sys 0m0.793s
00:05:43.947 ************************************
00:05:43.947 END TEST bdev_bounds
00:05:43.947 ************************************
00:05:43.947 21:04:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:43.947 21:04:55 blockdev_general.bdev_bounds -- common/autotest_common.sh@10 -- # set +x
00:05:43.947 21:04:55 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:05:43.947 21:04:55 blockdev_general -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:05:43.947 21:04:55 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:05:43.947 21:04:55 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:05:43.947 21:04:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:05:43.947 ************************************
00:05:43.947 START TEST bdev_nbd
00:05:43.947 ************************************
00:05:43.947 21:04:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Malloc0 Malloc1p0 Malloc1p1 Malloc2p0 Malloc2p1 Malloc2p2 Malloc2p3 Malloc2p4 Malloc2p5 Malloc2p6 Malloc2p7 TestPT raid0 concat0 raid1 AIO0' ''
00:05:43.947 21:04:55 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s
00:05:43.947 21:04:55 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]]
00:05:43.947 21:04:55 blockdev_general.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0
00:05:43.947
00:05:43.947 real 0m0.005s
00:05:43.947 user 0m0.001s
00:05:43.947 sys 0m0.007s
00:05:43.947 21:04:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable
00:05:43.947 21:04:55 blockdev_general.bdev_nbd -- common/autotest_common.sh@10 -- # set +x
00:05:43.947 ************************************
00:05:43.947 END TEST bdev_nbd
00:05:43.947 ************************************
00:05:44.206 21:04:55 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:05:44.206 21:04:55 blockdev_general -- bdev/blockdev.sh@763 -- # [[ y == y ]]
00:05:44.206 21:04:55 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = nvme ']'
00:05:44.206 21:04:55 blockdev_general -- bdev/blockdev.sh@764 -- # '[' bdev = gpt ']'
00:05:44.206 21:04:55 blockdev_general -- bdev/blockdev.sh@768 -- # run_test bdev_fio fio_test_suite ''
00:05:44.206 21:04:55 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
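The killprocess trace above shows how the harness tears down the bdevio app once the suite finishes. A minimal reconstruction of that helper from the traced steps (the real autotest_common.sh function may differ in details, and the Linux branch here is an assumption, since only the FreeBSD path executes in this run):

    killprocess() {
        [ -z "$1" ] && return 1            # '[' -z 47728 ']': require a pid argument
        kill -0 "$1" || return             # is the process still alive?
        if [ "$(uname)" = Linux ]; then
            process_name=$(ps -o comm= "$1")                 # assumed Linux variant
        else
            process_name=$(ps -c -o command "$1" | tail -1)  # FreeBSD path traced above
        fi
        # Never kill a sudo wrapper directly; real handling of that case may differ.
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $1"
        kill "$1"
        wait "$1"                          # reap the process so its exit status is collected
    }

Here pid 47728 resolves to the process name bdevio, so it is killed and waited on directly.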
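bdev_nbd passes in 0m0.005s because nbd_function_test bails out immediately off-Linux: NBD is a Linux-only kernel facility, so the uname -s guard at bdev/blockdev.sh@300 returns success on FreeBSD without exercising anything. The shape of that guard, as a sketch (the Linux body is elided because it never runs in this log):

    nbd_function_test() {
        if [ "$(uname -s)" != Linux ]; then
            # /dev/nbd* does not exist on FreeBSD; skip the test by succeeding.
            return 0
        fi
        # ... Linux-only NBD setup and I/O checks would follow here ...
    }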
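In the bdev_fio stage that starts here, fio_config_gen builds an ini-style job file (one [job_<bdev>]/filename=<bdev> section per bdev, plus serialize_overlap=1 once a fio 3.x is detected via fio --version), the harness checks with ldd | grep libasan | awk '{print $3}' whether the fio plugin links a sanitizer runtime that would have to be LD_PRELOADed first (none is found in this run, so asan_lib stays empty), and fio is then launched with SPDK's bdev ioengine plugin preloaded. Roughly, under the paths traced below; only the per-job sections, the serialize_overlap line, and the command line are actually visible in the trace:

    # Generated /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio (sketch):
    #   serialize_overlap=1
    #   [job_Malloc0]
    #   filename=Malloc0
    #   [job_Malloc1p0]
    #   filename=Malloc1p0
    #   ...one section per bdev, ending with [job_AIO0]...

    # Launch fio against SPDK bdevs instead of kernel block devices:
    LD_PRELOAD=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev \
      /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
      /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio \
      --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json

For the later trim pass, the job list is narrowed to bdevs that can actually service unmap by filtering the JSON dump further down with jq -r 'select(.supported_io_types.unmap == true) | .name'.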
00:05:44.206 21:04:55 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.206 21:04:55 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:05:44.206 ************************************ 00:05:44.206 START TEST bdev_fio 00:05:44.206 ************************************ 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1123 -- # fio_test_suite '' 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@331 -- # local env_context 00:05:44.206 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@335 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@336 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # echo '' 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # sed s/--env-context=// 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@339 -- # env_context= 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- bdev/blockdev.sh@340 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO '' 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=verify 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type=AIO 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z verify ']' 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' verify == verify ']' 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1314 -- # cat 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1323 -- # '[' AIO == AIO ']' 00:05:44.206 21:04:55 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # /usr/src/fio/fio --version 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1324 -- # [[ fio-3.35 == *\f\i\o\-\3* ]] 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1325 -- # echo serialize_overlap=1 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc0]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc0 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 
blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p0]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p0 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc1p1]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc1p1 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p0]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p0 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p1]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p1 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p2]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p2 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p3]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p3 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p4]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p4 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p5]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p5 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p6]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p6 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_Malloc2p7]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=Malloc2p7 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_TestPT]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=TestPT 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid0]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid0 00:05:45.143 21:04:56 
blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_concat0]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=concat0 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_raid1]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=raid1 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@341 -- # for b in "${bdevs_name[@]}" 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@342 -- # echo '[job_AIO0]' 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@343 -- # echo filename=AIO0 00:05:45.143 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@347 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json' 00:05:45.144 21:04:56 blockdev_general.bdev_fio -- bdev/blockdev.sh@349 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:45.144 21:04:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:45.144 21:04:56 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.144 21:04:56 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:45.144 ************************************ 00:05:45.144 START TEST bdev_fio_rw_verify 00:05:45.144 ************************************ 00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1123 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1339 -- # local sanitizers 00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # shift 00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # 
local asan_lib=
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libasan
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # asan_lib=
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:05:45.144 21:04:56 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=2048 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:05:45.144 job_Malloc0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_Malloc1p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_Malloc1p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_Malloc2p0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_Malloc2p1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_Malloc2p2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_Malloc2p3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_Malloc2p4: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_Malloc2p5: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_Malloc2p6: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_Malloc2p7: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_TestPT: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_raid0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_concat0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_raid1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 job_AIO0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:05:45.144 fio-3.35
00:05:45.144 Starting 16 threads
00:05:45.712 EAL: TSC is not safe to use in SMP mode
00:05:45.712 EAL: TSC is not invariant
00:05:57.942
00:05:57.942 job_Malloc0: (groupid=0, jobs=16): err= 0: pid=101347: Sun Jul 14 21:05:07 2024
00:05:57.942 read: IOPS=254k, BW=993MiB/s (1041MB/s)(9930MiB/10005msec)
00:05:57.942 slat (nsec): min=260, max=102377k, avg=2414.19, stdev=268771.36
00:05:57.942 clat (nsec): min=765, max=141878k, avg=38791.34, stdev=1183306.90
00:05:57.942 lat (nsec): min=1708, max=141878k, avg=41205.53, stdev=1216331.45
00:05:57.942 clat percentiles (usec):
00:05:57.942 | 50.000th=[ 9], 99.000th=[ 750], 99.900th=[ 832],
00:05:57.942 | 99.990th=[ 93848], 99.999th=[103285]
00:05:57.942 write: IOPS=424k, BW=1657MiB/s (1738MB/s)(16.2GiB/10005msec); 0 zone resets
00:05:57.942 slat (nsec): min=524, max=2148.0M, avg=20845.35, stdev=1869464.23
00:05:57.942 clat (nsec): min=726, max=2151.8M, avg=100773.04, stdev=4516469.27
00:05:57.942 lat (usec): min=10, max=2151.8k, avg=121.62, stdev=4888.16
00:05:57.942 clat percentiles (usec):
00:05:57.942 | 50.000th=[ 46], 99.000th=[ 717], 99.900th=[ 1221],
00:05:57.942 | 99.990th=[ 94897], 99.999th=[181404]
00:05:57.942 bw ( MiB/s): min= 814, max= 2733, per=100.00%, avg=1677.46, stdev=40.75, samples=291
00:05:57.942 iops : min=208604, max=699734, avg=429429.21, stdev=10433.25, samples=291
00:05:57.942 lat (nsec) : 750=0.01%, 1000=0.01%
00:05:57.942 lat (usec) : 2=0.08%, 4=12.59%, 10=19.48%, 20=19.17%, 50=19.07%
00:05:57.942 lat (usec) : 100=27.12%, 250=1.01%, 500=0.03%, 750=0.60%, 1000=0.75%
00:05:57.942 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01%
00:05:57.942 lat (msec) : 100=0.02%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01%
00:05:57.942 lat (msec) : 2000=0.01%, >=2000=0.01%
00:05:57.942 cpu : usr=56.76%, sys=2.89%, ctx=1121097, majf=0, minf=635
00:05:57.942 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.5%, 16=0.0%, 32=0.0%, >=64=0.0%
00:05:57.942 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:05:57.942 complete : 0=0.0%, 4=89.0%, 8=11.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:05:57.942 issued rwts: total=2542113,4244249,0,0 short=0,0,0,0 dropped=0,0,0,0
00:05:57.942 latency : target=0, window=0, percentile=100.00%, depth=8
00:05:57.942
00:05:57.942 Run status group 0 (all jobs):
00:05:57.942 READ: bw=993MiB/s (1041MB/s), 993MiB/s-993MiB/s (1041MB/s-1041MB/s), io=9930MiB (10.4GB), run=10005-10005msec
00:05:57.942 WRITE: bw=1657MiB/s (1738MB/s), 1657MiB/s-1657MiB/s (1738MB/s-1738MB/s), io=16.2GiB (17.4GB), run=10005-10005msec
00:05:57.942
00:05:57.942 real 0m12.451s
00:05:57.942 user 1m34.991s
00:05:57.942 sys 0m8.341s
00:05:57.942 ************************************
00:05:57.942 END TEST bdev_fio_rw_verify
00:05:57.942 ************************************
00:05:57.942 21:05:08
blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:57.942 21:05:08 blockdev_general.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x 00:05:57.942 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:05:57.942 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f 00:05:57.942 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@351 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:57.942 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@354 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' '' 00:05:57.942 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1280 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:57.942 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1281 -- # local workload=trim 00:05:57.943 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1282 -- # local bdev_type= 00:05:57.943 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1283 -- # local env_context= 00:05:57.943 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1284 -- # local fio_dir=/usr/src/fio 00:05:57.943 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1286 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']' 00:05:57.943 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1291 -- # '[' -z trim ']' 00:05:57.943 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -n '' ']' 00:05:57.943 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1299 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:05:57.943 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1301 -- # cat 00:05:57.943 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1313 -- # '[' trim == verify ']' 00:05:57.943 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1328 -- # '[' trim == trim ']' 00:05:57.943 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1329 -- # echo rw=trimwrite 00:05:57.943 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:57.944 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # printf '%s\n' '{' ' "name": "Malloc0",' ' "aliases": [' ' "b66438ca-4224-11ef-aa83-81fbc7dfef58"' ' ],' ' "product_name": "Malloc disk",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b66438ca-4224-11ef-aa83-81fbc7dfef58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 20000,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {}' '}' '{' ' "name": "Malloc1p0",' ' "aliases": [' ' "25576fe3-968c-c556-a59f-74d2151f082d"' ' ],' 
' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "25576fe3-968c-c556-a59f-74d2151f082d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc1p1",' ' "aliases": [' ' "0c4cb360-dded-3e58-a1c4-ca13f4906ec1"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 32768,' ' "uuid": "0c4cb360-dded-3e58-a1c4-ca13f4906ec1",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc1",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p0",' ' "aliases": [' ' "12b6a24d-4aa0-0753-b327-50cedc0f1740"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "12b6a24d-4aa0-0753-b327-50cedc0f1740",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 0' ' }' ' }' '}' '{' ' "name": "Malloc2p1",' ' "aliases": [' ' "68eba0c0-7415-bd56-9ab1-31dbecd15e08"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "68eba0c0-7415-bd56-9ab1-31dbecd15e08",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' 
' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 8192' ' }' ' }' '}' '{' ' "name": "Malloc2p2",' ' "aliases": [' ' "56c76647-e945-3c59-b874-21fd4dda1333"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "56c76647-e945-3c59-b874-21fd4dda1333",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 16384' ' }' ' }' '}' '{' ' "name": "Malloc2p3",' ' "aliases": [' ' "81c49e4b-d901-175b-977e-4bc33cbc923d"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "81c49e4b-d901-175b-977e-4bc33cbc923d",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 24576' ' }' ' }' '}' '{' ' "name": "Malloc2p4",' ' "aliases": [' ' "fc2747a9-879b-5d5f-ba69-81204fd3b631"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "fc2747a9-879b-5d5f-ba69-81204fd3b631",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 32768' ' }' ' }' '}' '{' ' "name": "Malloc2p5",' ' "aliases": [' ' "b410fd06-a1d2-ec5c-aa8c-214b0c919a38"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "b410fd06-a1d2-ec5c-aa8c-214b0c919a38",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' 
"zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 40960' ' }' ' }' '}' '{' ' "name": "Malloc2p6",' ' "aliases": [' ' "e7081617-7125-0a5a-a6f9-24cf5c86b134"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "e7081617-7125-0a5a-a6f9-24cf5c86b134",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 49152' ' }' ' }' '}' '{' ' "name": "Malloc2p7",' ' "aliases": [' ' "abaa31ba-6443-a05e-a947-e52f66c33b38"' ' ],' ' "product_name": "Split Disk",' ' "block_size": 512,' ' "num_blocks": 8192,' ' "uuid": "abaa31ba-6443-a05e-a947-e52f66c33b38",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "split": {' ' "base_bdev": "Malloc2",' ' "offset_blocks": 57344' ' }' ' }' '}' '{' ' "name": "TestPT",' ' "aliases": [' ' "4f2802dc-f8aa-5a59-84c7-a2a91807a246"' ' ],' ' "product_name": "passthru",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "4f2802dc-f8aa-5a59-84c7-a2a91807a246",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": true,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' 
"dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "passthru": {' ' "name": "TestPT",' ' "base_bdev_name": "Malloc3"' ' }' ' }' '}' '{' ' "name": "raid0",' ' "aliases": [' ' "b671b1d3-4224-11ef-aa83-81fbc7dfef58"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b671b1d3-4224-11ef-aa83-81fbc7dfef58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b671b1d3-4224-11ef-aa83-81fbc7dfef58",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "raid0",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc4",' ' "uuid": "b6691969-4224-11ef-aa83-81fbc7dfef58",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc5",' ' "uuid": "b66a51e5-4224-11ef-aa83-81fbc7dfef58",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "concat0",' ' "aliases": [' ' "b672ded7-4224-11ef-aa83-81fbc7dfef58"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 131072,' ' "uuid": "b672ded7-4224-11ef-aa83-81fbc7dfef58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b672ded7-4224-11ef-aa83-81fbc7dfef58",' ' "strip_size_kb": 64,' ' "state": "online",' ' "raid_level": "concat",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc6",' ' "uuid": "b66b8a64-4224-11ef-aa83-81fbc7dfef58",' ' "is_configured": 
true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc7",' ' "uuid": "b66cc2f1-4224-11ef-aa83-81fbc7dfef58",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "raid1",' ' "aliases": [' ' "b67416a7-4224-11ef-aa83-81fbc7dfef58"' ' ],' ' "product_name": "Raid Volume",' ' "block_size": 512,' ' "num_blocks": 65536,' ' "uuid": "b67416a7-4224-11ef-aa83-81fbc7dfef58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "memory_domains": [' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' },' ' {' ' "dma_device_id": "system",' ' "dma_device_type": 1' ' },' ' {' ' "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",' ' "dma_device_type": 2' ' }' ' ],' ' "driver_specific": {' ' "raid": {' ' "uuid": "b67416a7-4224-11ef-aa83-81fbc7dfef58",' ' "strip_size_kb": 0,' ' "state": "online",' ' "raid_level": "raid1",' ' "superblock": false,' ' "num_base_bdevs": 2,' ' "num_base_bdevs_discovered": 2,' ' "num_base_bdevs_operational": 2,' ' "base_bdevs_list": [' ' {' ' "name": "Malloc8",' ' "uuid": "b66dfb67-4224-11ef-aa83-81fbc7dfef58",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' },' ' {' ' "name": "Malloc9",' ' "uuid": "b66f33e7-4224-11ef-aa83-81fbc7dfef58",' ' "is_configured": true,' ' "data_offset": 0,' ' "data_size": 65536' ' }' ' ]' ' }' ' }' '}' '{' ' "name": "AIO0",' ' "aliases": [' ' "b67c07fe-4224-11ef-aa83-81fbc7dfef58"' ' ],' ' "product_name": "AIO disk",' ' "block_size": 2048,' ' "num_blocks": 5000,' ' "uuid": "b67c07fe-4224-11ef-aa83-81fbc7dfef58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "aio": {' ' "filename": "/home/vagrant/spdk_repo/spdk/test/bdev/aiofile",' ' "block_size_override": true,' ' "readonly": false,' ' "fallocate": false' ' }' ' }' '}' 00:05:57.944 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@355 -- # [[ -n Malloc0 00:05:57.944 Malloc1p0 00:05:57.944 Malloc1p1 00:05:57.944 Malloc2p0 00:05:57.944 Malloc2p1 00:05:57.944 Malloc2p2 00:05:57.944 Malloc2p3 00:05:57.944 Malloc2p4 00:05:57.944 Malloc2p5 00:05:57.944 Malloc2p6 00:05:57.944 Malloc2p7 00:05:57.944 TestPT 00:05:57.944 raid0 00:05:57.944 concat0 ]] 00:05:57.944 21:05:08 blockdev_general.bdev_fio -- 
bdev/blockdev.sh@356 -- # jq -r 'select(.supported_io_types.unmap == true) | .name' 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc0]' 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc0 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p0]' 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p0 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc1p1]' 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc1p1 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p0]' 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p0 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p1]' 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p1 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p2]' 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p2 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p3]' 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo
filename=Malloc2p3 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p4]' 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p4 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p5]' 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p5 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p6]' 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p6 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_Malloc2p7]' 00:05:57.945 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=Malloc2p7 00:05:57.946 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.946 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_TestPT]' 00:05:57.946 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=TestPT 00:05:57.946 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.946 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_raid0]' 00:05:57.946 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=raid0 00:05:57.946 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@356 -- # for b in $(printf '%s\n' "${bdevs[@]}" | jq -r 'select(.supported_io_types.unmap == true) | .name') 00:05:57.946 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@357 -- # echo '[job_concat0]' 00:05:57.946 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@358 -- # echo filename=concat0 00:05:57.946 21:05:08 blockdev_general.bdev_fio -- bdev/blockdev.sh@367 -- # run_test bdev_fio_trim fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:57.946 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:05:57.946 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:57.946 21:05:08 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:05:57.946 ************************************ 00:05:57.946 START TEST bdev_fio_trim 00:05:57.946 ************************************ 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1123 -- # 
fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1356 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1339 -- # local sanitizers 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1340 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1341 -- # shift 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1343 -- # local asan_lib= 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libasan 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1345 -- # asan_lib= 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:05:57.946 21:05:08 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --verify_state_save=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output 00:05:57.946 job_Malloc0: (g=0): 
rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 job_Malloc1p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 job_Malloc1p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 job_Malloc2p0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 job_Malloc2p1: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 job_Malloc2p2: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 job_Malloc2p3: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 job_Malloc2p4: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 job_Malloc2p5: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 job_Malloc2p6: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 job_Malloc2p7: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 job_TestPT: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 job_raid0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 job_concat0: (g=0): rw=trimwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8 00:05:57.946 fio-3.35 00:05:57.946 Starting 14 threads 00:05:58.205 EAL: TSC is not safe to use in SMP mode 00:05:58.205 EAL: TSC is not invariant 00:06:10.444 00:06:10.444 job_Malloc0: (groupid=0, jobs=14): err= 0: pid=101366: Sun Jul 14 21:05:20 2024 00:06:10.444 write: IOPS=2672k, BW=10.2GiB/s (10.9GB/s)(102GiB/10001msec); 0 zone resets 00:06:10.444 slat (nsec): min=240, max=805989k, avg=1379.39, stdev=266122.39 00:06:10.444 clat (nsec): min=1201, max=1393.7M, avg=14613.65, stdev=1159219.41 00:06:10.445 lat (nsec): min=1737, max=1393.7M, avg=15993.04, stdev=1200947.25 00:06:10.445 clat percentiles (usec): 00:06:10.445 | 50.000th=[ 6], 99.000th=[ 21], 99.900th=[ 955], 99.990th=[ 3589], 00:06:10.445 | 99.999th=[94897] 00:06:10.445 bw ( MiB/s): min= 3222, max=16503, per=100.00%, avg=10631.27, stdev=314.65, samples=261 00:06:10.445 iops : min=824858, max=4224902, avg=2721605.07, stdev=80550.56, samples=261 00:06:10.445 trim: IOPS=2672k, BW=10.2GiB/s (10.9GB/s)(102GiB/10001msec); 0 zone resets 00:06:10.445 slat (nsec): min=506, max=149604k, avg=1201.51, stdev=172607.81 00:06:10.445 clat (nsec): min=340, max=1393.7M, avg=10723.81, stdev=1017192.06 00:06:10.445 lat (nsec): min=1566, max=1393.7M, avg=11925.32, stdev=1032420.18 00:06:10.445 clat percentiles (usec): 00:06:10.445 | 50.000th=[ 8], 99.000th=[ 19], 99.900th=[ 29], 99.990th=[ 48], 00:06:10.445 | 99.999th=[94897] 00:06:10.445 bw ( MiB/s): min= 3222, max=16503, per=100.00%, avg=10631.28, stdev=314.65, samples=261 00:06:10.445 iops : min=824858, max=4224900, avg=2721606.70, stdev=80550.55, samples=261 00:06:10.445 lat (nsec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:06:10.445 lat (usec) : 2=0.09%, 4=24.60%, 10=67.50%, 20=6.80%, 50=0.78% 00:06:10.445 lat 
(usec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.18% 00:06:10.445 lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%, 20=0.01%, 50=0.01% 00:06:10.445 lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.01% 00:06:10.445 lat (msec) : 2000=0.01% 00:06:10.445 cpu : usr=63.92%, sys=4.40%, ctx=1262717, majf=0, minf=0 00:06:10.445 IO depths : 1=12.5%, 2=25.0%, 4=50.0%, 8=12.6%, 16=0.0%, 32=0.0%, >=64=0.0% 00:06:10.445 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:06:10.445 complete : 0=0.0%, 4=88.9%, 8=11.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:06:10.445 issued rwts: total=0,26722082,26722085,0 short=0,0,0,0 dropped=0,0,0,0 00:06:10.445 latency : target=0, window=0, percentile=100.00%, depth=8 00:06:10.445 00:06:10.445 Run status group 0 (all jobs): 00:06:10.445 WRITE: bw=10.2GiB/s (10.9GB/s), 10.2GiB/s-10.2GiB/s (10.9GB/s-10.9GB/s), io=102GiB (109GB), run=10001-10001msec 00:06:10.445 TRIM: bw=10.2GiB/s (10.9GB/s), 10.2GiB/s-10.2GiB/s (10.9GB/s-10.9GB/s), io=102GiB (109GB), run=10001-10001msec 00:06:10.445 00:06:10.445 real 0m12.397s 00:06:10.445 user 1m34.822s 00:06:10.445 sys 0m9.003s 00:06:10.445 21:05:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.445 21:05:21 blockdev_general.bdev_fio.bdev_fio_trim -- common/autotest_common.sh@10 -- # set +x 00:06:10.445 ************************************ 00:06:10.445 END TEST bdev_fio_trim 00:06:10.445 ************************************ 00:06:10.445 21:05:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1142 -- # return 0 00:06:10.445 21:05:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@368 -- # rm -f 00:06:10.445 21:05:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@369 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio 00:06:10.445 21:05:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@370 -- # popd 00:06:10.445 /home/vagrant/spdk_repo/spdk 00:06:10.445 21:05:21 blockdev_general.bdev_fio -- bdev/blockdev.sh@371 -- # trap - SIGINT SIGTERM EXIT 00:06:10.445 00:06:10.445 real 0m25.828s 00:06:10.445 user 3m10.077s 00:06:10.445 sys 0m18.002s 00:06:10.445 21:05:21 blockdev_general.bdev_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.445 21:05:21 blockdev_general.bdev_fio -- common/autotest_common.sh@10 -- # set +x 00:06:10.445 ************************************ 00:06:10.445 END TEST bdev_fio 00:06:10.445 ************************************ 00:06:10.445 21:05:21 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:10.445 21:05:21 blockdev_general -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:10.445 21:05:21 blockdev_general -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:10.445 21:05:21 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:06:10.445 21:05:21 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.445 21:05:21 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:10.445 ************************************ 00:06:10.445 START TEST bdev_verify 00:06:10.445 ************************************ 00:06:10.445 21:05:21 blockdev_general.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:10.445 
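The bdev_fio_trim stage that just finished is driven entirely by generated configuration: the jq filter traced above selects every bdev whose supported_io_types.unmap is true, and each selected name gets a [job_<name>] section in test/bdev/bdev.fio before fio is launched with the SPDK bdev plugin preloaded. A condensed sketch of that mechanism (illustrative only; $fio_config, $rootdir, and the append redirections are assumptions here, and the real logic lives in test/bdev/blockdev.sh as shown in the xtrace):

    # one fio job section per unmap-capable bdev, filtered out of the JSON dump
    for b in $(printf '%s\n' "${bdevs[@]}" \
                 | jq -r 'select(.supported_io_types.unmap == true) | .name'); do
        echo "[job_$b]"    >> "$fio_config"   # job section named after the bdev
        echo "filename=$b" >> "$fio_config"   # the spdk_bdev ioengine maps this to the bdev
    done
    # run fio against the generated job file with the SPDK bdev ioengine preloaded
    LD_PRELOAD=$rootdir/build/fio/spdk_bdev /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        "$fio_config" --spdk_json_conf=$rootdir/test/bdev/bdev.json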
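The bdevperf command echoed above starts the verify stage; annotated for reference (flag meanings per standard bdevperf usage, with -C reproduced from the log without interpretation):

    # --json bdev.json   build the bdev stack from the JSON config file
    # -q 128             queue depth per job
    # -o 4096            I/O size in bytes
    # -w verify          write a pattern, read it back, and compare
    # -t 5               run time in seconds
    # -m 0x3             core mask: cores 0 and 1 (the two reactors started below)
    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3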
[2024-07-14 21:05:21.423743] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:10.445 [2024-07-14 21:05:21.424014] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:10.445 EAL: TSC is not safe to use in SMP mode 00:06:10.445 EAL: TSC is not invariant 00:06:10.445 [2024-07-14 21:05:21.943694] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:10.703 [2024-07-14 21:05:22.048210] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:10.703 [2024-07-14 21:05:22.048279] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:10.703 [2024-07-14 21:05:22.051805] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.703 [2024-07-14 21:05:22.051792] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:10.703 [2024-07-14 21:05:22.110346] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:10.703 [2024-07-14 21:05:22.110378] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:10.703 [2024-07-14 21:05:22.118332] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:10.703 [2024-07-14 21:05:22.118370] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:10.703 [2024-07-14 21:05:22.126343] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:10.703 [2024-07-14 21:05:22.126363] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:10.703 [2024-07-14 21:05:22.126386] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:10.703 [2024-07-14 21:05:22.174352] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:10.703 [2024-07-14 21:05:22.174399] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:10.703 [2024-07-14 21:05:22.174425] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1762d3436800 00:06:10.703 [2024-07-14 21:05:22.174432] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:10.703 [2024-07-14 21:05:22.174833] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:10.703 [2024-07-14 21:05:22.174859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:10.962 Running I/O for 5 seconds... 
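For orientation, the stack under test (per the JSON dump printed during the fio stage and the passthru notices just above) is: Malloc2 carved into eight splits Malloc2p0..Malloc2p7, a passthru bdev TestPT over Malloc3, raid0/concat/raid1 volumes striped or mirrored over Malloc4..Malloc9, and a 2048-byte-block AIO bdev backed by test/bdev/aiofile. A hypothetical rpc.py sequence that would build an equivalent topology (the test actually loads it from bdev.json; these commands and sizes are an approximation, not the autotest code):

    rpc.py bdev_malloc_create -b Malloc2 32 512            # 65536 x 512 B blocks = 32 MiB
    rpc.py bdev_split_create Malloc2 8                     # -> Malloc2p0..Malloc2p7, 8192 blocks each
    rpc.py bdev_passthru_create -b Malloc3 -p TestPT       # the pt_bdev registered in the log above
    rpc.py bdev_raid_create -n raid0   -z 64 -r raid0  -b "Malloc4 Malloc5"
    rpc.py bdev_raid_create -n concat0 -z 64 -r concat -b "Malloc6 Malloc7"
    rpc.py bdev_raid_create -n raid1         -r raid1  -b "Malloc8 Malloc9"   # no strip size for raid1
    rpc.py bdev_aio_create test/bdev/aiofile AIO0 2048     # block_size override to 2048 B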
00:06:16.230 00:06:16.230 Latency(us) 00:06:16.230 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:16.230 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x1000 00:06:16.230 Malloc0 : 5.03 6037.37 23.58 0.00 0.00 21174.42 58.18 47185.93 00:06:16.230 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x1000 length 0x1000 00:06:16.230 Malloc0 : 5.04 123.26 0.48 0.00 0.00 1037582.13 1027.72 1525201.92 00:06:16.230 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x800 00:06:16.230 Malloc1p0 : 5.01 6111.32 23.87 0.00 0.00 20933.25 245.76 18945.87 00:06:16.230 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x800 length 0x800 00:06:16.230 Malloc1p0 : 5.01 6741.09 26.33 0.00 0.00 18976.94 242.04 21924.78 00:06:16.230 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x800 00:06:16.230 Malloc1p1 : 5.02 6116.88 23.89 0.00 0.00 20910.46 264.38 18230.93 00:06:16.230 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x800 length 0x800 00:06:16.230 Malloc1p1 : 5.01 6740.71 26.33 0.00 0.00 18974.77 260.65 21448.15 00:06:16.230 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x200 00:06:16.230 Malloc2p0 : 5.02 6116.51 23.89 0.00 0.00 20907.83 251.35 17635.15 00:06:16.230 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x200 length 0x200 00:06:16.230 Malloc2p0 : 5.01 6740.33 26.33 0.00 0.00 18972.36 242.04 20614.06 00:06:16.230 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x200 00:06:16.230 Malloc2p1 : 5.02 6116.30 23.89 0.00 0.00 20904.98 260.65 17277.68 00:06:16.230 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x200 length 0x200 00:06:16.230 Malloc2p1 : 5.01 6739.96 26.33 0.00 0.00 18970.05 255.07 20018.28 00:06:16.230 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x200 00:06:16.230 Malloc2p2 : 5.02 6116.04 23.89 0.00 0.00 20902.10 247.62 16681.90 00:06:16.230 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x200 length 0x200 00:06:16.230 Malloc2p2 : 5.01 6739.56 26.33 0.00 0.00 18967.66 243.90 19303.34 00:06:16.230 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x200 00:06:16.230 Malloc2p3 : 5.02 6115.74 23.89 0.00 0.00 20898.96 258.79 16681.90 00:06:16.230 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x200 length 0x200 00:06:16.230 Malloc2p3 : 5.01 6739.19 26.32 0.00 0.00 18965.82 262.52 16801.05 00:06:16.230 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x200 00:06:16.230 Malloc2p4 : 5.02 6115.50 23.89 0.00 0.00 20896.42 279.27 16920.21 
00:06:16.230 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x200 length 0x200 00:06:16.230 Malloc2p4 : 5.01 6738.83 26.32 0.00 0.00 18963.10 277.41 16562.74 00:06:16.230 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x200 00:06:16.230 Malloc2p5 : 5.02 6115.24 23.89 0.00 0.00 20893.36 253.21 17277.68 00:06:16.230 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x200 length 0x200 00:06:16.230 Malloc2p5 : 5.01 6738.44 26.32 0.00 0.00 18960.91 247.62 16801.05 00:06:16.230 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x200 00:06:16.230 Malloc2p6 : 5.02 6114.96 23.89 0.00 0.00 20890.84 249.48 17515.99 00:06:16.230 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x200 length 0x200 00:06:16.230 Malloc2p6 : 5.02 6738.08 26.32 0.00 0.00 18958.61 240.17 17635.15 00:06:16.230 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x200 00:06:16.230 Malloc2p7 : 5.02 6114.69 23.89 0.00 0.00 20888.39 245.76 18350.09 00:06:16.230 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x200 length 0x200 00:06:16.230 Malloc2p7 : 5.02 6737.79 26.32 0.00 0.00 18956.19 242.04 18469.24 00:06:16.230 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x1000 00:06:16.230 TestPT : 5.03 6082.73 23.76 0.00 0.00 20982.34 923.46 18588.40 00:06:16.230 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x1000 length 0x1000 00:06:16.230 TestPT : 5.03 4630.57 18.09 0.00 0.00 27565.10 25.37 90082.24 00:06:16.230 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x2000 00:06:16.230 raid0 : 5.02 6114.30 23.88 0.00 0.00 20880.62 262.52 18588.40 00:06:16.230 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x2000 length 0x2000 00:06:16.230 raid0 : 5.02 6737.42 26.32 0.00 0.00 18949.81 273.69 19541.65 00:06:16.230 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x2000 00:06:16.230 concat0 : 5.02 6114.08 23.88 0.00 0.00 20877.74 266.24 19303.34 00:06:16.230 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x2000 length 0x2000 00:06:16.230 concat0 : 5.02 6737.13 26.32 0.00 0.00 18946.87 251.35 20256.59 00:06:16.230 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x1000 00:06:16.230 raid1 : 5.02 6113.79 23.88 0.00 0.00 20874.76 392.84 19660.81 00:06:16.230 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x1000 length 0x1000 00:06:16.230 raid1 : 5.02 6760.60 26.41 0.00 0.00 18877.62 148.01 21209.84 00:06:16.230 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x0 length 0x4e2 00:06:16.230 
AIO0 : 5.07 604.68 2.36 0.00 0.00 209857.28 584.61 308853.39 00:06:16.230 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:16.230 Verification LBA range: start 0x4e2 length 0x4e2 00:06:16.230 AIO0 : 5.07 607.66 2.37 0.00 0.00 208916.15 841.54 310759.89 00:06:16.230 =================================================================================================================== 00:06:16.230 Total : 185210.74 723.48 0.00 0.00 22084.62 25.37 1525201.92 00:06:16.230 00:06:16.230 real 0m6.198s 00:06:16.230 user 0m10.191s 00:06:16.230 sys 0m0.695s 00:06:16.230 21:05:27 blockdev_general.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:16.230 21:05:27 blockdev_general.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:16.230 ************************************ 00:06:16.230 END TEST bdev_verify 00:06:16.230 ************************************ 00:06:16.230 21:05:27 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:16.230 21:05:27 blockdev_general -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:16.230 21:05:27 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:06:16.230 21:05:27 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:16.230 21:05:27 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:16.230 ************************************ 00:06:16.230 START TEST bdev_verify_big_io 00:06:16.230 ************************************ 00:06:16.230 21:05:27 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:16.230 [2024-07-14 21:05:27.673270] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:06:16.230 [2024-07-14 21:05:27.673535] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:16.797 EAL: TSC is not safe to use in SMP mode 00:06:16.797 EAL: TSC is not invariant 00:06:16.797 [2024-07-14 21:05:28.228557] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:16.797 [2024-07-14 21:05:28.314087] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:16.797 [2024-07-14 21:05:28.314172] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
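bdev_verify_big_io repeats the verify workload with the I/O size raised from 4 KiB to 64 KiB; apart from -o, the invocation is the one annotated earlier:

    # identical to the previous verify run except for the I/O size
    build/examples/bdevperf --json test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3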
00:06:16.797 [2024-07-14 21:05:28.317183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:16.797 [2024-07-14 21:05:28.317179] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.058 [2024-07-14 21:05:28.376310] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:17.058 [2024-07-14 21:05:28.376392] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1 00:06:17.058 [2024-07-14 21:05:28.384298] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:17.058 [2024-07-14 21:05:28.384333] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2 00:06:17.058 [2024-07-14 21:05:28.392313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:17.058 [2024-07-14 21:05:28.392349] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3 00:06:17.058 [2024-07-14 21:05:28.392371] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival 00:06:17.058 [2024-07-14 21:05:28.440320] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3 00:06:17.058 [2024-07-14 21:05:28.440387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:06:17.058 [2024-07-14 21:05:28.440412] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12c0d2836800 00:06:17.058 [2024-07-14 21:05:28.440419] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:06:17.058 [2024-07-14 21:05:28.440850] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:06:17.058 [2024-07-14 21:05:28.440876] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT 00:06:17.058 [2024-07-14 21:05:28.541830] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.542125] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p0 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.542359] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.542573] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p1 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.542774] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.542959] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p2 simultaneously (32). 
Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.543145] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.543327] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p3 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.543505] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.543694] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p4 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.543892] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.544074] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p5 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.544259] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.544449] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p6 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.544637] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.544822] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev Malloc2p7 simultaneously (32). Queue depth is limited to 32 00:06:17.058 [2024-07-14 21:05:28.546899] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:06:17.059 [2024-07-14 21:05:28.547128] bdevperf.c:1821:bdevperf_construct_job: *WARNING*: Due to constraints of verify job, queue depth (-q, 128) can't exceed the number of IO requests which can be submitted to the bdev AIO0 simultaneously (78). Queue depth is limited to 78 00:06:17.059 Running I/O for 5 seconds... 
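The queue-depth warnings above are self-consistent: the clamp works out to half the number of 64 KiB I/Os that fit on the bdev (an inference from the numbers in this log, not a claim about bdevperf internals):

    # Malloc2p*: 8192 blocks x 512 B  = 4 MiB;      4 MiB / 64 KiB     = 64 I/Os; 64 / 2  = 32
    # AIO0:      5000 blocks x 2048 B = 10240000 B; 10240000 / 65536   = 156 I/Os; 156 / 2 = 78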
00:06:22.326
00:06:22.326 Latency(us)
00:06:22.326 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:22.326 Job: Malloc0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:22.326 Verification LBA range: start 0x0 length 0x100
00:06:22.326 Malloc0 : 5.06 3943.20 246.45 0.00 0.00 32382.39 82.39 102951.13
00:06:22.326 Job: Malloc0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:22.326 Verification LBA range: start 0x100 length 0x100
00:06:22.326 Malloc0 : 5.05 3947.70 246.73 0.00 0.00 32340.16 91.23 111530.39
00:06:22.326 Job: Malloc1p0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:22.326 Verification LBA range: start 0x0 length 0x80
00:06:22.326 Malloc1p0 : 5.07 1870.51 116.91 0.00 0.00 68113.24 863.88 131548.67
00:06:22.326 Job: Malloc1p0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:22.326 Verification LBA range: start 0x80 length 0x80
00:06:22.326 Malloc1p0 : 5.10 514.37 32.15 0.00 0.00 247575.81 420.77 293601.37
00:06:22.326 Job: Malloc1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:22.326 Verification LBA range: start 0x0 length 0x80
00:06:22.326 Malloc1p1 : 5.08 516.18 32.26 0.00 0.00 246663.24 292.31 295507.87
00:06:22.326 Job: Malloc1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:22.326 Verification LBA range: start 0x80 length 0x80
00:06:22.327 Malloc1p1 : 5.10 514.34 32.15 0.00 0.00 247229.86 381.67 285975.36
00:06:22.327 Job: Malloc2p0 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x0 length 0x20
00:06:22.327 Malloc2p0 : 5.07 498.90 31.18 0.00 0.00 63777.65 245.76 97708.25
00:06:22.327 Job: Malloc2p0 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x20 length 0x20
00:06:22.327 Malloc2p0 : 5.07 498.49 31.16 0.00 0.00 63715.68 376.09 101044.63
00:06:22.327 Job: Malloc2p1 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x0 length 0x20
00:06:22.327 Malloc2p1 : 5.07 498.88 31.18 0.00 0.00 63745.42 249.48 96755.00
00:06:22.327 Job: Malloc2p1 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x20 length 0x20
00:06:22.327 Malloc2p1 : 5.07 498.45 31.15 0.00 0.00 63690.52 389.12 100091.38
00:06:22.327 Job: Malloc2p2 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x0 length 0x20
00:06:22.327 Malloc2p2 : 5.07 498.85 31.18 0.00 0.00 63729.63 255.07 95801.75
00:06:22.327 Job: Malloc2p2 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x20 length 0x20
00:06:22.327 Malloc2p2 : 5.07 498.42 31.15 0.00 0.00 63676.82 385.40 98661.50
00:06:22.327 Job: Malloc2p3 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x0 length 0x20
00:06:22.327 Malloc2p3 : 5.07 498.83 31.18 0.00 0.00 63703.65 242.04 95325.12
00:06:22.327 Job: Malloc2p3 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x20 length 0x20
00:06:22.327 Malloc2p3 : 5.07 498.38 31.15 0.00 0.00 63647.50 385.40 97708.25
00:06:22.327 Job: Malloc2p4 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x0 length 0x20
00:06:22.327 Malloc2p4 : 5.07 498.81 31.18 0.00 0.00 63688.47 242.04 94371.87
00:06:22.327 Job: Malloc2p4 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x20 length 0x20
00:06:22.327 Malloc2p4 : 5.07 498.34 31.15 0.00 0.00 63630.22 463.59 96278.37
00:06:22.327 Job: Malloc2p5 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x0 length 0x20
00:06:22.327 Malloc2p5 : 5.07 498.79 31.17 0.00 0.00 63660.04 256.93 93418.62
00:06:22.327 Job: Malloc2p5 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x20 length 0x20
00:06:22.327 Malloc2p5 : 5.07 498.30 31.14 0.00 0.00 63601.46 344.44 94848.49
00:06:22.327 Job: Malloc2p6 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x0 length 0x20
00:06:22.327 Malloc2p6 : 5.07 498.76 31.17 0.00 0.00 63650.75 255.07 92465.37
00:06:22.327 Job: Malloc2p6 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x20 length 0x20
00:06:22.327 Malloc2p6 : 5.08 500.45 31.28 0.00 0.00 63322.39 392.84 93418.62
00:06:22.327 Job: Malloc2p7 (Core Mask 0x1, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x0 length 0x20
00:06:22.327 Malloc2p7 : 5.07 498.73 31.17 0.00 0.00 63620.01 251.35 91988.74
00:06:22.327 Job: Malloc2p7 (Core Mask 0x2, workload: verify, depth: 32, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x20 length 0x20
00:06:22.327 Malloc2p7 : 5.08 500.41 31.28 0.00 0.00 63302.51 383.53 92465.37
00:06:22.327 Job: TestPT (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x0 length 0x100
00:06:22.327 TestPT : 5.11 510.25 31.89 0.00 0.00 247475.06 5451.41 236406.30
00:06:22.327 Job: TestPT (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x100 length 0x100
00:06:22.327 TestPT : 5.20 278.39 17.40 0.00 0.00 452039.96 19899.12 526194.66
00:06:22.327 Job: raid0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x0 length 0x200
00:06:22.327 raid0 : 5.08 516.16 32.26 0.00 0.00 245330.55 363.05 280255.85
00:06:22.327 Job: raid0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x200 length 0x200
00:06:22.327 raid0 : 5.10 517.37 32.34 0.00 0.00 244243.06 439.39 265003.83
00:06:22.327 Job: concat0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x0 length 0x200
00:06:22.327 concat0 : 5.08 519.19 32.45 0.00 0.00 243592.59 390.98 272629.84
00:06:22.327 Job: concat0 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x200 length 0x200
00:06:22.327 concat0 : 5.10 517.34 32.33 0.00 0.00 243876.35 392.84 257377.82
00:06:22.327 Job: raid1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x0 length 0x100
00:06:22.327 raid1 : 5.09 519.17 32.45 0.00 0.00 243090.07 387.26 263097.33
00:06:22.327 Job: raid1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x100 length 0x100
00:06:22.327 raid1 : 5.10 523.10 32.69 0.00 0.00 240886.43 437.53 245938.81
00:06:22.327 Job: AIO0 (Core Mask 0x1, workload: verify, depth: 78, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x0 length 0x4e
00:06:22.327 AIO0 : 5.08 513.08 32.07 0.00 0.00 149733.10 543.65 158239.70
00:06:22.327 Job: AIO0 (Core Mask 0x2, workload: verify, depth: 78, IO size: 65536)
00:06:22.327 Verification LBA range: start 0x4e length 0x4e
00:06:22.327 AIO0 : 5.10 523.44 32.72 0.00 0.00 146450.97 480.35 147753.94
00:06:22.327 ===================================================================================================================
00:06:22.327 Total : 24225.59 1514.10 0.00 0.00 100883.46 82.39 526194.66
00:06:22.584
00:06:22.584 real 0m6.364s
00:06:22.584 user 0m11.141s
00:06:22.584 sys 0m0.812s
00:06:22.584 21:05:34 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:22.584 21:05:34 blockdev_general.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:06:22.584 ************************************
00:06:22.584 END TEST bdev_verify_big_io
00:06:22.584 ************************************
00:06:22.584 21:05:34 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:06:22.584 21:05:34 blockdev_general -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:22.584 21:05:34 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:22.584 21:05:34 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:22.584 21:05:34 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:22.584 ************************************
00:06:22.584 START TEST bdev_write_zeroes
00:06:22.584 ************************************
00:06:22.584 21:05:34 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:22.584 [2024-07-14 21:05:34.088670] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:06:22.584 [2024-07-14 21:05:34.088899] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:23.149 EAL: TSC is not safe to use in SMP mode
00:06:23.149 EAL: TSC is not invariant
00:06:23.149 [2024-07-14 21:05:34.596230] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:23.149 [2024-07-14 21:05:34.672230] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:23.149 [2024-07-14 21:05:34.674606] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:23.406 [2024-07-14 21:05:34.731782] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:06:23.406 [2024-07-14 21:05:34.731833] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc1
00:06:23.406 [2024-07-14 21:05:34.739772] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:06:23.406 [2024-07-14 21:05:34.739807] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc2
00:06:23.406 [2024-07-14 21:05:34.747786] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:06:23.406 [2024-07-14 21:05:34.747806] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: Malloc3
00:06:23.406 [2024-07-14 21:05:34.747828] vbdev_passthru.c: 735:bdev_passthru_create_disk: *NOTICE*: vbdev creation deferred pending base bdev arrival
00:06:23.406 [2024-07-14 21:05:34.795796] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc3
00:06:23.406 [2024-07-14 21:05:34.795859] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:06:23.406 [2024-07-14 21:05:34.795884] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x28e67c636800
00:06:23.406 [2024-07-14 21:05:34.795891] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed
00:06:23.406 [2024-07-14 21:05:34.796324] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:06:23.406 [2024-07-14 21:05:34.796353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: TestPT
00:06:23.406 Running I/O for 1 seconds...
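For reference, the bdev_write_zeroes run whose results follow was started with the invocation captured verbatim in the xtrace above, i.e. 4096-byte write_zeroes IOs at queue depth 128 for 1 second against every bdev defined in bdev.json:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
      -q 128 -o 4096 -w write_zeroes -t 1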
00:06:24.780
00:06:24.780 Latency(us)
00:06:24.780 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:24.780 Job: Malloc0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 Malloc0 : 1.01 29333.57 114.58 0.00 0.00 4362.86 204.80 7983.48
00:06:24.780 Job: Malloc1p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 Malloc1p0 : 1.01 29327.22 114.56 0.00 0.00 4360.99 228.07 7596.22
00:06:24.780 Job: Malloc1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 Malloc1p1 : 1.01 29324.37 114.55 0.00 0.00 4359.29 169.43 7357.91
00:06:24.780 Job: Malloc2p0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 Malloc2p0 : 1.01 29321.58 114.54 0.00 0.00 4358.22 175.94 7238.75
00:06:24.780 Job: Malloc2p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 Malloc2p1 : 1.01 29318.12 114.52 0.00 0.00 4356.57 179.67 7060.02
00:06:24.780 Job: Malloc2p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 Malloc2p2 : 1.01 29315.19 114.51 0.00 0.00 4355.16 217.83 7208.96
00:06:24.780 Job: Malloc2p3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 Malloc2p3 : 1.01 29311.61 114.50 0.00 0.00 4354.18 175.01 7030.23
00:06:24.780 Job: Malloc2p4 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 Malloc2p4 : 1.01 29307.60 114.48 0.00 0.00 4352.89 170.36 6911.07
00:06:24.780 Job: Malloc2p5 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 Malloc2p5 : 1.01 29304.78 114.47 0.00 0.00 4351.33 178.73 6791.91
00:06:24.780 Job: Malloc2p6 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 Malloc2p6 : 1.01 29301.82 114.46 0.00 0.00 4349.94 175.01 6613.18
00:06:24.780 Job: Malloc2p7 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 Malloc2p7 : 1.01 29298.40 114.45 0.00 0.00 4349.20 168.49 6553.60
00:06:24.780 Job: TestPT (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 TestPT : 1.01 29295.12 114.43 0.00 0.00 4347.79 166.63 6404.66
00:06:24.780 Job: raid0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 raid0 : 1.01 29290.07 114.41 0.00 0.00 4346.05 240.17 6196.13
00:06:24.780 Job: concat0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 concat0 : 1.01 29286.85 114.40 0.00 0.00 4344.60 271.83 5957.82
00:06:24.780 Job: raid1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 raid1 : 1.01 29278.47 114.37 0.00 0.00 4341.95 433.80 5481.19
00:06:24.780 Job: AIO0 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:06:24.780 AIO0 : 1.07 2201.59 8.60 0.00 0.00 56142.37 696.32 203042.51
00:06:24.780 ===================================================================================================================
00:06:24.780 Total : 441816.35 1725.85 0.00 0.00 4625.58 166.63 203042.51
00:06:24.781
00:06:24.781 real 0m2.138s
00:06:24.781 user 0m1.468s
00:06:24.781 sys 0m0.560s
00:06:24.781 21:05:36 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:24.781 21:05:36 blockdev_general.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:06:24.781 ************************************
00:06:24.781 END TEST bdev_write_zeroes
00:06:24.781 ************************************
00:06:24.781 21:05:36 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:06:24.781 21:05:36 blockdev_general -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:24.781 21:05:36 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:24.781 21:05:36 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:24.781 21:05:36 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:24.781 ************************************
00:06:24.781 START TEST bdev_json_nonenclosed
00:06:24.781 ************************************
00:06:24.781 21:05:36 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:24.781 [2024-07-14 21:05:36.280911] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:06:24.781 [2024-07-14 21:05:36.281178] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:25.349 EAL: TSC is not safe to use in SMP mode
00:06:25.349 EAL: TSC is not invariant
00:06:25.349 [2024-07-14 21:05:36.787215] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:25.349 [2024-07-14 21:05:36.871688] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:25.349 [2024-07-14 21:05:36.874062] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:25.349 [2024-07-14 21:05:36.874105] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:06:25.349 [2024-07-14 21:05:36.874116] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:06:25.349 [2024-07-14 21:05:36.874124] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:25.607
00:06:25.607 real 0m0.711s
00:06:25.607 user 0m0.173s
00:06:25.607 sys 0m0.536s
00:06:25.607 21:05:36 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234
00:06:25.607 21:05:36 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:25.607 ************************************
00:06:25.607 END TEST bdev_json_nonenclosed
00:06:25.607 ************************************
00:06:25.607 21:05:36 blockdev_general.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:06:25.607 21:05:37 blockdev_general -- common/autotest_common.sh@1142 -- # return 234
00:06:25.607 21:05:37 blockdev_general -- bdev/blockdev.sh@782 -- # true
00:06:25.607 21:05:37 blockdev_general -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:25.607 21:05:37 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']'
00:06:25.607 21:05:37 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:25.607 21:05:37 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:25.607 ************************************
00:06:25.607 START TEST bdev_json_nonarray
00:06:25.607 ************************************
00:06:25.607 21:05:37 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:06:25.607 [2024-07-14 21:05:37.041094] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:06:25.607 [2024-07-14 21:05:37.041292] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:26.175 EAL: TSC is not safe to use in SMP mode
00:06:26.175 EAL: TSC is not invariant
00:06:26.175 [2024-07-14 21:05:37.568824] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:26.175 [2024-07-14 21:05:37.653453] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0].
00:06:26.175 [2024-07-14 21:05:37.655706] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:06:26.175 [2024-07-14 21:05:37.655775] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:06:26.175 [2024-07-14 21:05:37.655786] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:06:26.175 [2024-07-14 21:05:37.655793] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:26.433
00:06:26.433 real 0m0.742s
00:06:26.433 user 0m0.171s
00:06:26.433 sys 0m0.568s
00:06:26.433 21:05:37 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234
00:06:26.433 21:05:37 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:26.433 21:05:37 blockdev_general.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:06:26.433 ************************************
00:06:26.433 END TEST bdev_json_nonarray
00:06:26.433 ************************************
00:06:26.433 21:05:37 blockdev_general -- common/autotest_common.sh@1142 -- # return 234
00:06:26.433 21:05:37 blockdev_general -- bdev/blockdev.sh@785 -- # true
00:06:26.433 21:05:37 blockdev_general -- bdev/blockdev.sh@787 -- # [[ bdev == bdev ]]
00:06:26.433 21:05:37 blockdev_general -- bdev/blockdev.sh@788 -- # run_test bdev_qos qos_test_suite ''
00:06:26.433 21:05:37 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:06:26.433 21:05:37 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:26.433 21:05:37 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:26.433 ************************************
00:06:26.433 START TEST bdev_qos
00:06:26.433 ************************************
00:06:26.433 21:05:37 blockdev_general.bdev_qos -- common/autotest_common.sh@1123 -- # qos_test_suite ''
00:06:26.433 21:05:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@446 -- # QOS_PID=48137
00:06:26.433 Process qos testing pid: 48137
00:06:26.433 21:05:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@447 -- # echo 'Process qos testing pid: 48137'
00:06:26.433 21:05:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@448 -- # trap 'cleanup; killprocess $QOS_PID; exit 1' SIGINT SIGTERM EXIT
00:06:26.433 21:05:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@445 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 256 -o 4096 -w randread -t 60 ''
00:06:26.433 21:05:37 blockdev_general.bdev_qos -- bdev/blockdev.sh@449 -- # waitforlisten 48137
00:06:26.433 21:05:37 blockdev_general.bdev_qos -- common/autotest_common.sh@829 -- # '[' -z 48137 ']'
00:06:26.433 21:05:37 blockdev_general.bdev_qos -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:26.433 21:05:37 blockdev_general.bdev_qos -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:26.433 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:26.433 21:05:37 blockdev_general.bdev_qos -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:26.433 21:05:37 blockdev_general.bdev_qos -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:26.433 21:05:37 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:26.433 [2024-07-14 21:05:37.835995] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:06:26.433 [2024-07-14 21:05:37.836165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ]
00:06:27.001 EAL: TSC is not safe to use in SMP mode
00:06:27.001 EAL: TSC is not invariant
00:06:27.001 [2024-07-14 21:05:38.338176] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:27.001 [2024-07-14 21:05:38.412855] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1].
00:06:27.001 [2024-07-14 21:05:38.415206] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@858 -- # (( i == 0 ))
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@862 -- # return 0
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@451 -- # rpc_cmd bdev_malloc_create -b Malloc_0 128 512
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:27.597 Malloc_0
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@452 -- # waitforbdev Malloc_0
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_0
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_0 -t 2000
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:27.597 [
00:06:27.597 {
00:06:27.597 "name": "Malloc_0",
00:06:27.597 "aliases": [
00:06:27.597 "d26edb69-4224-11ef-aa83-81fbc7dfef58"
00:06:27.597 ],
00:06:27.597 "product_name": "Malloc disk",
00:06:27.597 "block_size": 512,
00:06:27.597 "num_blocks": 262144,
00:06:27.597 "uuid": "d26edb69-4224-11ef-aa83-81fbc7dfef58",
00:06:27.597 "assigned_rate_limits": {
00:06:27.597 "rw_ios_per_sec": 0,
00:06:27.597 "rw_mbytes_per_sec": 0,
00:06:27.597 "r_mbytes_per_sec": 0,
00:06:27.597 "w_mbytes_per_sec": 0
00:06:27.597 },
00:06:27.597 "claimed": false,
00:06:27.597 "zoned": false,
00:06:27.597 "supported_io_types": {
00:06:27.597 "read": true,
00:06:27.597 "write": true,
00:06:27.597 "unmap": true,
00:06:27.597 "flush": true,
00:06:27.597 "reset": true,
00:06:27.597 "nvme_admin": false,
00:06:27.597 "nvme_io": false,
00:06:27.597 "nvme_io_md": false,
00:06:27.597 "write_zeroes": true,
00:06:27.597 "zcopy": true,
00:06:27.597 "get_zone_info": false,
00:06:27.597 "zone_management": false,
00:06:27.597 "zone_append": false,
00:06:27.597 "compare": false,
00:06:27.597 "compare_and_write": false,
00:06:27.597 "abort": true,
00:06:27.597 "seek_hole": false,
00:06:27.597 "seek_data": false,
00:06:27.597 "copy": true,
00:06:27.597 "nvme_iov_md": false
00:06:27.597 },
00:06:27.597 "memory_domains": [
00:06:27.597 {
00:06:27.597 "dma_device_id": "system",
00:06:27.597 "dma_device_type": 1
00:06:27.597 },
00:06:27.597 {
00:06:27.597 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:06:27.597 "dma_device_type": 2
00:06:27.597 }
00:06:27.597 ],
00:06:27.597 "driver_specific": {}
00:06:27.597 }
00:06:27.597 ]
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@453 -- # rpc_cmd bdev_null_create Null_1 128 512
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:27.597 Null_1
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@454 -- # waitforbdev Null_1
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@897 -- # local bdev_name=Null_1
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@898 -- # local bdev_timeout=
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@899 -- # local i
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # [[ -z '' ]]
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@900 -- # bdev_timeout=2000
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Null_1 -t 2000
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:27.597 [
00:06:27.597 {
00:06:27.597 "name": "Null_1",
00:06:27.597 "aliases": [
00:06:27.597 "d273bca3-4224-11ef-aa83-81fbc7dfef58"
00:06:27.597 ],
00:06:27.597 "product_name": "Null disk",
00:06:27.597 "block_size": 512,
00:06:27.597 "num_blocks": 262144,
00:06:27.597 "uuid": "d273bca3-4224-11ef-aa83-81fbc7dfef58",
00:06:27.597 "assigned_rate_limits": {
00:06:27.597 "rw_ios_per_sec": 0,
00:06:27.597 "rw_mbytes_per_sec": 0,
00:06:27.597 "r_mbytes_per_sec": 0,
00:06:27.597 "w_mbytes_per_sec": 0
00:06:27.597 },
00:06:27.597 "claimed": false,
00:06:27.597 "zoned": false,
00:06:27.597 "supported_io_types": {
00:06:27.597 "read": true,
00:06:27.597 "write": true,
00:06:27.597 "unmap": false,
00:06:27.597 "flush": false,
00:06:27.597 "reset": true,
00:06:27.597 "nvme_admin": false,
00:06:27.597 "nvme_io": false,
00:06:27.597 "nvme_io_md": false,
00:06:27.597 "write_zeroes": true,
00:06:27.597 "zcopy": false,
00:06:27.597 "get_zone_info": false,
00:06:27.597 "zone_management": false,
00:06:27.597 "zone_append": false,
00:06:27.597 "compare": false,
00:06:27.597 "compare_and_write": false,
00:06:27.597 "abort": true,
00:06:27.597 "seek_hole": false,
00:06:27.597 "seek_data": false,
00:06:27.597 "copy": false,
00:06:27.597 "nvme_iov_md": false
00:06:27.597 },
00:06:27.597 "driver_specific": {}
00:06:27.597 }
00:06:27.597 ]
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- common/autotest_common.sh@905 -- # return 0
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@457 -- # qos_function_test
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@456 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@410 -- # local qos_lower_iops_limit=1000
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@411 -- # local qos_lower_bw_limit=2
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@412 -- # local io_result=0
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@413 -- # local iops_limit=0
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@414 -- # local bw_limit=0
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # get_io_result IOPS Malloc_0
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=IOPS
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Malloc_0
00:06:27.597 21:05:38 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1
00:06:27.597 Running I/O for 60 seconds...
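qos_test_suite proceeds in two phases: the 60-second run below first measures Malloc_0 with no limit applied, the measured rate is turned into an IOPS cap (176000 here, derived from the roughly 707k unthrottled IOPS), the cap is applied with bdev_set_qos_limit, and run_qos_test then asserts that the re-measured rate lands within 10% on either side of the cap (158400-193600). A sketch of the same setup driven by hand through SPDK's rpc.py, assuming a bdevperf instance started with -z and listening on the default /var/tmp/spdk.sock; the test issues the identical RPCs through its rpc_cmd wrapper:

  # create the two 128 MiB, 512-byte-block test bdevs, as traced above
  scripts/rpc.py bdev_malloc_create -b Malloc_0 128 512
  scripts/rpc.py bdev_null_create Null_1 128 512
  # cap Malloc_0 at the derived read/write IOPS limit
  scripts/rpc.py bdev_set_qos_limit --rw_ios_per_sec 176000 Malloc_0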
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 707723.18 2830892.74 0.00 0.00 3035136.00 0.00 0.00 '
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']'
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # awk '{print $2}'
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@380 -- # iostat_result=707723.18
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 707723
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@416 -- # io_result=707723
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@418 -- # iops_limit=176000
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@419 -- # '[' 176000 -gt 1000 ']'
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@422 -- # rpc_cmd bdev_set_qos_limit --rw_ios_per_sec 176000 Malloc_0
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- bdev/blockdev.sh@423 -- # run_test bdev_qos_iops run_qos_test 176000 IOPS Malloc_0
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:32.907 21:05:44 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:33.165 ************************************
00:06:33.165 START TEST bdev_qos_iops
00:06:33.165 ************************************
00:06:33.165 21:05:44 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1123 -- # run_qos_test 176000 IOPS Malloc_0
00:06:33.165 21:05:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@389 -- # local qos_limit=176000
00:06:33.165 21:05:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@390 -- # local qos_result=0
00:06:33.165 21:05:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # get_io_result IOPS Malloc_0
00:06:33.165 21:05:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@375 -- # local limit_type=IOPS
00:06:33.165 21:05:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
00:06:33.165 21:05:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@377 -- # local iostat_result
00:06:33.165 21:05:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:33.165 21:05:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # grep Malloc_0
00:06:33.165 21:05:44 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # tail -1
00:06:39.730 21:05:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 176033.59 704134.36 0.00 0.00 753984.00 0.00 0.00 '
00:06:39.730 21:05:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@379 -- # '[' IOPS = IOPS ']'
00:06:39.730 21:05:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # awk '{print $2}'
00:06:39.730 21:05:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@380 -- # iostat_result=176033.59
00:06:39.730 21:05:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@385 -- # echo 176033
00:06:39.730 21:05:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@392 -- # qos_result=176033
00:06:39.730 21:05:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@393 -- # '[' IOPS = BANDWIDTH ']'
00:06:39.730 21:05:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@396 -- # lower_limit=158400
00:06:39.730 21:05:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@397 -- # upper_limit=193600
00:06:39.730 21:05:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 176033 -lt 158400 ']'
00:06:39.730 21:05:49 blockdev_general.bdev_qos.bdev_qos_iops -- bdev/blockdev.sh@400 -- # '[' 176033 -gt 193600 ']'
00:06:39.730
00:06:39.730 real 0m5.528s
00:06:39.730 user 0m0.146s
00:06:39.730 sys 0m0.018s
00:06:39.730 21:05:49 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:39.730 21:05:49 blockdev_general.bdev_qos.bdev_qos_iops -- common/autotest_common.sh@10 -- # set +x
00:06:39.730 ************************************
00:06:39.730 END TEST bdev_qos_iops
00:06:39.730 ************************************
00:06:39.730 21:05:50 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0
00:06:39.730 21:05:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # get_io_result BANDWIDTH Null_1
00:06:39.730 21:05:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
00:06:39.730 21:05:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1
00:06:39.730 21:05:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@377 -- # local iostat_result
00:06:39.730 21:05:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:39.730 21:05:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # grep Null_1
00:06:39.730 21:05:50 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # tail -1
00:06:43.923 21:05:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 470631.61 1882526.44 0.00 0.00 1980416.00 0.00 0.00 '
00:06:43.923 21:05:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
00:06:43.923 21:05:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:43.923 21:05:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # awk '{print $6}'
00:06:43.924 21:05:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@382 -- # iostat_result=1980416.00
00:06:43.924 21:05:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@385 -- # echo 1980416
00:06:43.924 21:05:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@427 -- # bw_limit=1980416
00:06:43.924 21:05:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@428 -- # bw_limit=193
00:06:43.924 21:05:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@429 -- # '[' 193 -lt 2 ']'
00:06:43.924 21:05:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@432 -- # rpc_cmd bdev_set_qos_limit --rw_mbytes_per_sec 193 Null_1
00:06:43.924 21:05:55 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:43.924 21:05:55 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:43.924 21:05:55 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:43.924 21:05:55 blockdev_general.bdev_qos -- bdev/blockdev.sh@433 -- # run_test bdev_qos_bw run_qos_test 193 BANDWIDTH Null_1
00:06:43.924 21:05:55 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:06:43.924 21:05:55 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:43.924 21:05:55 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:43.924 ************************************
00:06:43.924 START TEST bdev_qos_bw
00:06:43.924 ************************************
00:06:43.924 21:05:55 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1123 -- # run_qos_test 193 BANDWIDTH Null_1
00:06:43.924 21:05:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@389 -- # local qos_limit=193
00:06:43.924 21:05:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@390 -- # local qos_result=0
00:06:43.924 21:05:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Null_1
00:06:43.924 21:05:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
00:06:43.924 21:05:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Null_1
00:06:43.924 21:05:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@377 -- # local iostat_result
00:06:43.924 21:05:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:43.924 21:05:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # grep Null_1
00:06:43.924 21:05:55 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # tail -1
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@378 -- # iostat_result='Null_1 49411.57 197646.29 0.00 0.00 212260.00 0.00 0.00 '
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}'
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@382 -- # iostat_result=212260.00
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@385 -- # echo 212260
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@392 -- # qos_result=212260
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@394 -- # qos_limit=197632
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@396 -- # lower_limit=177868
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@397 -- # upper_limit=217395
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 212260 -lt 177868 ']'
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- bdev/blockdev.sh@400 -- # '[' 212260 -gt 217395 ']'
00:06:50.481
00:06:50.481 real 0m5.540s
00:06:50.481 user 0m0.197s
00:06:50.481 sys 0m0.018s
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:50.481 ************************************
00:06:50.481 END TEST bdev_qos_bw
00:06:50.481 21:06:00 blockdev_general.bdev_qos.bdev_qos_bw -- common/autotest_common.sh@10 -- # set +x
00:06:50.481 ************************************
00:06:50.481 21:06:01 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0
00:06:50.481 21:06:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@436 -- # rpc_cmd bdev_set_qos_limit --r_mbytes_per_sec 2 Malloc_0
00:06:50.481 21:06:01 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:50.481 21:06:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:50.481 21:06:01 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:50.481 21:06:01 blockdev_general.bdev_qos -- bdev/blockdev.sh@437 -- # run_test bdev_qos_ro_bw run_qos_test 2 BANDWIDTH Malloc_0
00:06:50.481 21:06:01 blockdev_general.bdev_qos -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']'
00:06:50.481 21:06:01 blockdev_general.bdev_qos -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:50.481 21:06:01 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:50.481 ************************************
00:06:50.481 START TEST bdev_qos_ro_bw
00:06:50.481 ************************************
00:06:50.481 21:06:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1123 -- # run_qos_test 2 BANDWIDTH Malloc_0
00:06:50.481 21:06:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@389 -- # local qos_limit=2
00:06:50.481 21:06:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@390 -- # local qos_result=0
00:06:50.481 21:06:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # get_io_result BANDWIDTH Malloc_0
00:06:50.481 21:06:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@375 -- # local limit_type=BANDWIDTH
00:06:50.481 21:06:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@376 -- # local qos_dev=Malloc_0
00:06:50.481 21:06:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@377 -- # local iostat_result
00:06:50.481 21:06:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # /home/vagrant/spdk_repo/spdk/scripts/iostat.py -d -i 1 -t 5
00:06:50.481 21:06:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # tail -1
00:06:50.481 21:06:01 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # grep Malloc_0
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@378 -- # iostat_result='Malloc_0 511.27 2045.09 0.00 0.00 2152.00 0.00 0.00 '
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@379 -- # '[' BANDWIDTH = IOPS ']'
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@381 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # awk '{print $6}'
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@382 -- # iostat_result=2152.00
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@385 -- # echo 2152
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@392 -- # qos_result=2152
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@393 -- # '[' BANDWIDTH = BANDWIDTH ']'
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@394 -- # qos_limit=2048
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@396 -- # lower_limit=1843
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@397 -- # upper_limit=2252
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2152 -lt 1843 ']'
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- bdev/blockdev.sh@400 -- # '[' 2152 -gt 2252 ']'
00:06:55.756
00:06:55.756 real 0m5.385s
00:06:55.756 user 0m0.124s
00:06:55.756 sys 0m0.042s
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:55.756 ************************************
00:06:55.756 END TEST bdev_qos_ro_bw
00:06:55.756 ************************************
00:06:55.756 21:06:06 blockdev_general.bdev_qos.bdev_qos_ro_bw -- common/autotest_common.sh@10 -- # set +x
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- common/autotest_common.sh@1142 -- # return 0
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@459 -- # rpc_cmd bdev_malloc_delete Malloc_0
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@460 -- # rpc_cmd bdev_null_delete Null_1
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- common/autotest_common.sh@559 -- # xtrace_disable
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:55.756
00:06:55.756 Latency(us)
00:06:55.756 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:55.756 Job: Malloc_0 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:55.756 Malloc_0 : 27.93 240607.30 939.87 0.00 0.00 1054.69 310.92 503316.63
00:06:55.756 Job: Null_1 (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096)
00:06:55.756 Null_1 : 27.96 338847.85 1323.62 0.00 0.00 755.19 61.91 29074.16
00:06:55.756 ===================================================================================================================
00:06:55.756 Total : 579455.14 2263.50 0.00 0.00 879.47 61.91 503316.63
00:06:55.756 0
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- bdev/blockdev.sh@461 -- # killprocess 48137
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- common/autotest_common.sh@948 -- # '[' -z 48137 ']'
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- common/autotest_common.sh@952 -- # kill -0 48137
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # uname
00:06:55.756 21:06:06 blockdev_general.bdev_qos -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']'
00:06:55.756 21:06:07 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # ps -c -o command 48137
00:06:55.756 21:06:07 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # tail -1
00:06:55.756 21:06:07 blockdev_general.bdev_qos -- common/autotest_common.sh@956 -- # process_name=bdevperf
00:06:55.756 21:06:07 blockdev_general.bdev_qos -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']'
00:06:55.756 killing process with pid 48137
00:06:55.756 21:06:07 blockdev_general.bdev_qos -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48137'
00:06:55.756 21:06:07 blockdev_general.bdev_qos -- common/autotest_common.sh@967 -- # kill 48137
00:06:55.756 Received shutdown signal, test time was about 27.974809 seconds
00:06:55.756
00:06:55.756 Latency(us)
Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:55.756 ===================================================================================================================
00:06:55.756 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:06:55.756 21:06:07 blockdev_general.bdev_qos -- common/autotest_common.sh@972 -- # wait 48137
00:06:55.756 21:06:07 blockdev_general.bdev_qos -- bdev/blockdev.sh@462 -- # trap - SIGINT SIGTERM EXIT
00:06:55.756
00:06:55.756 real 0m29.361s
00:06:55.756 user 0m30.129s
00:06:55.756 sys 0m0.839s
00:06:55.756 21:06:07 blockdev_general.bdev_qos -- common/autotest_common.sh@1124 -- # xtrace_disable
00:06:55.756 21:06:07 blockdev_general.bdev_qos -- common/autotest_common.sh@10 -- # set +x
00:06:55.756 ************************************
00:06:55.756 END TEST bdev_qos
00:06:55.756 ************************************
00:06:55.756 21:06:07 blockdev_general -- common/autotest_common.sh@1142 -- # return 0
00:06:55.756 21:06:07 blockdev_general -- bdev/blockdev.sh@789 -- # run_test bdev_qd_sampling qd_sampling_test_suite ''
00:06:55.756 21:06:07 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']'
00:06:55.756 21:06:07 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable
00:06:55.756 21:06:07 blockdev_general -- common/autotest_common.sh@10 -- # set +x
00:06:55.756 ************************************
00:06:55.756 START TEST bdev_qd_sampling
00:06:55.756 ************************************
00:06:55.757 21:06:07 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1123 -- # qd_sampling_test_suite ''
00:06:55.757 21:06:07 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@538 -- # QD_DEV=Malloc_QD
00:06:55.757 21:06:07 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@541 -- # QD_PID=48358
00:06:55.757 Process bdev QD sampling period testing pid: 48358
00:06:55.757 21:06:07 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@542 -- # echo 'Process bdev QD sampling period testing pid: 48358'
00:06:55.757 21:06:07 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@543 -- # trap 'cleanup; killprocess $QD_PID; exit 1' SIGINT SIGTERM EXIT
00:06:55.757 21:06:07 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@540 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 5 -C ''
00:06:55.757 21:06:07 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@544 -- # waitforlisten 48358
00:06:55.757 21:06:07 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@829 -- # '[' -z 48358 ']'
00:06:55.757 21:06:07 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock
00:06:55.757 21:06:07 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@834 -- # local max_retries=100
00:06:55.757 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:06:55.757 21:06:07 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:06:55.757 21:06:07 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@838 -- # xtrace_disable
00:06:55.757 21:06:07 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x
00:06:55.757 [2024-07-14 21:06:07.252983] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization...
00:06:55.757 [2024-07-14 21:06:07.253223] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:56.324 EAL: TSC is not safe to use in SMP mode 00:06:56.324 EAL: TSC is not invariant 00:06:56.324 [2024-07-14 21:06:07.794532] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:56.582 [2024-07-14 21:06:07.898880] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:06:56.582 [2024-07-14 21:06:07.898949] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:06:56.582 [2024-07-14 21:06:07.902394] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:56.582 [2024-07-14 21:06:07.902384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@862 -- # return 0 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@546 -- # rpc_cmd bdev_malloc_create -b Malloc_QD 128 512 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:56.840 Malloc_QD 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@547 -- # waitforbdev Malloc_QD 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_QD 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@899 -- # local i 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_QD -t 2000 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:56.840 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:56.840 [ 00:06:56.840 { 00:06:56.840 "name": "Malloc_QD", 00:06:56.840 "aliases": [ 00:06:56.840 "e3fe74a8-4224-11ef-aa83-81fbc7dfef58" 00:06:56.840 ], 00:06:56.840 "product_name": "Malloc disk", 00:06:56.840 "block_size": 512, 00:06:56.840 "num_blocks": 262144, 00:06:56.841 "uuid": "e3fe74a8-4224-11ef-aa83-81fbc7dfef58", 00:06:56.841 "assigned_rate_limits": { 00:06:56.841 "rw_ios_per_sec": 0, 00:06:56.841 "rw_mbytes_per_sec": 0, 00:06:56.841 "r_mbytes_per_sec": 0, 00:06:56.841 "w_mbytes_per_sec": 0 00:06:56.841 }, 00:06:56.841 "claimed": false, 
00:06:56.841 "zoned": false, 00:06:56.841 "supported_io_types": { 00:06:56.841 "read": true, 00:06:56.841 "write": true, 00:06:56.841 "unmap": true, 00:06:56.841 "flush": true, 00:06:56.841 "reset": true, 00:06:56.841 "nvme_admin": false, 00:06:56.841 "nvme_io": false, 00:06:56.841 "nvme_io_md": false, 00:06:56.841 "write_zeroes": true, 00:06:56.841 "zcopy": true, 00:06:56.841 "get_zone_info": false, 00:06:56.841 "zone_management": false, 00:06:56.841 "zone_append": false, 00:06:56.841 "compare": false, 00:06:56.841 "compare_and_write": false, 00:06:56.841 "abort": true, 00:06:56.841 "seek_hole": false, 00:06:56.841 "seek_data": false, 00:06:56.841 "copy": true, 00:06:56.841 "nvme_iov_md": false 00:06:56.841 }, 00:06:56.841 "memory_domains": [ 00:06:56.841 { 00:06:56.841 "dma_device_id": "system", 00:06:56.841 "dma_device_type": 1 00:06:56.841 }, 00:06:56.841 { 00:06:56.841 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:06:56.841 "dma_device_type": 2 00:06:56.841 } 00:06:56.841 ], 00:06:56.841 "driver_specific": {} 00:06:56.841 } 00:06:56.841 ] 00:06:56.841 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:56.841 21:06:08 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@905 -- # return 0 00:06:56.841 21:06:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@550 -- # sleep 2 00:06:56.841 21:06:08 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@549 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:06:57.099 Running I/O for 5 seconds... 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@551 -- # qd_sampling_function_test Malloc_QD 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@519 -- # local bdev_name=Malloc_QD 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@520 -- # local sampling_period=10 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@521 -- # local iostats 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@523 -- # rpc_cmd bdev_set_qd_sampling_period Malloc_QD 10 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # rpc_cmd bdev_get_iostat -b Malloc_QD 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@525 -- # iostats='{ 00:06:59.001 "tick_rate": 2199999327, 00:06:59.001 "ticks": 705885682595, 00:06:59.001 "bdevs": [ 00:06:59.001 { 00:06:59.001 "name": "Malloc_QD", 00:06:59.001 "bytes_read": 12810490368, 00:06:59.001 "num_read_ops": 3127555, 00:06:59.001 "bytes_written": 0, 00:06:59.001 "num_write_ops": 0, 00:06:59.001 "bytes_unmapped": 0, 00:06:59.001 "num_unmap_ops": 0, 00:06:59.001 "bytes_copied": 0, 00:06:59.001 "num_copy_ops": 0, 00:06:59.001 "read_latency_ticks": 2211082975128, 00:06:59.001 "max_read_latency_ticks": 1303474, 00:06:59.001 "min_read_latency_ticks": 
44256, 00:06:59.001 "write_latency_ticks": 0, 00:06:59.001 "max_write_latency_ticks": 0, 00:06:59.001 "min_write_latency_ticks": 0, 00:06:59.001 "unmap_latency_ticks": 0, 00:06:59.001 "max_unmap_latency_ticks": 0, 00:06:59.001 "min_unmap_latency_ticks": 0, 00:06:59.001 "copy_latency_ticks": 0, 00:06:59.001 "max_copy_latency_ticks": 0, 00:06:59.001 "min_copy_latency_ticks": 0, 00:06:59.001 "io_error": {}, 00:06:59.001 "queue_depth_polling_period": 10, 00:06:59.001 "queue_depth": 512, 00:06:59.001 "io_time": 370, 00:06:59.001 "weighted_io_time": 189440 00:06:59.001 } 00:06:59.001 ] 00:06:59.001 }' 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # jq -r '.bdevs[0].queue_depth_polling_period' 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@527 -- # qd_sampling_period=10 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 == null ']' 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@529 -- # '[' 10 -ne 10 ']' 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@553 -- # rpc_cmd bdev_malloc_delete Malloc_QD 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:59.001 00:06:59.001 Latency(us) 00:06:59.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.001 Job: Malloc_QD (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:06:59.001 Malloc_QD : 1.99 790034.50 3086.07 0.00 0.00 323.78 56.55 595.78 00:06:59.001 Job: Malloc_QD (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:06:59.001 Malloc_QD : 1.99 801604.33 3131.27 0.00 0.00 319.10 56.55 592.06 00:06:59.001 =================================================================================================================== 00:06:59.001 Total : 1591638.83 6217.34 0.00 0.00 321.42 56.55 595.78 00:06:59.001 0 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@554 -- # killprocess 48358 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@948 -- # '[' -z 48358 ']' 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@952 -- # kill -0 48358 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # uname 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # ps -c -o command 48358 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # tail -1 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:06:59.001 killing process with pid 48358 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48358' 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@967 -- # kill 48358 00:06:59.001 Received shutdown signal, test time was about 2.027059 seconds 00:06:59.001 00:06:59.001 Latency(us) 
00:06:59.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:59.001 =================================================================================================================== 00:06:59.001 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:59.001 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@972 -- # wait 48358 00:06:59.260 21:06:10 blockdev_general.bdev_qd_sampling -- bdev/blockdev.sh@555 -- # trap - SIGINT SIGTERM EXIT 00:06:59.260 00:06:59.260 real 0m3.394s 00:06:59.260 user 0m6.036s 00:06:59.260 sys 0m0.666s 00:06:59.260 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:59.260 21:06:10 blockdev_general.bdev_qd_sampling -- common/autotest_common.sh@10 -- # set +x 00:06:59.260 ************************************ 00:06:59.260 END TEST bdev_qd_sampling 00:06:59.260 ************************************ 00:06:59.260 21:06:10 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:06:59.260 21:06:10 blockdev_general -- bdev/blockdev.sh@790 -- # run_test bdev_error error_test_suite '' 00:06:59.260 21:06:10 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:59.260 21:06:10 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:59.260 21:06:10 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:06:59.260 ************************************ 00:06:59.260 START TEST bdev_error 00:06:59.260 ************************************ 00:06:59.260 21:06:10 blockdev_general.bdev_error -- common/autotest_common.sh@1123 -- # error_test_suite '' 00:06:59.260 21:06:10 blockdev_general.bdev_error -- bdev/blockdev.sh@466 -- # DEV_1=Dev_1 00:06:59.260 21:06:10 blockdev_general.bdev_error -- bdev/blockdev.sh@467 -- # DEV_2=Dev_2 00:06:59.260 21:06:10 blockdev_general.bdev_error -- bdev/blockdev.sh@468 -- # ERR_DEV=EE_Dev_1 00:06:59.260 21:06:10 blockdev_general.bdev_error -- bdev/blockdev.sh@472 -- # ERR_PID=48405 00:06:59.260 Process error testing pid: 48405 00:06:59.260 21:06:10 blockdev_general.bdev_error -- bdev/blockdev.sh@473 -- # echo 'Process error testing pid: 48405' 00:06:59.260 21:06:10 blockdev_general.bdev_error -- bdev/blockdev.sh@474 -- # waitforlisten 48405 00:06:59.260 21:06:10 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48405 ']' 00:06:59.260 21:06:10 blockdev_general.bdev_error -- bdev/blockdev.sh@471 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 -f '' 00:06:59.260 21:06:10 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:59.260 21:06:10 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:59.260 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:59.260 21:06:10 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:59.260 21:06:10 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:59.260 21:06:10 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:06:59.260 [2024-07-14 21:06:10.696006] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
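For reference, the queue-depth sampling pass that just finished reduces to four RPCs. A minimal stand-alone sketch, assuming an SPDK target is already listening on the default socket /var/tmp/spdk.sock (rpc_cmd in the trace above is a thin wrapper over scripts/rpc.py):

./scripts/rpc.py bdev_malloc_create -b Malloc_QD 128 512      # 128 MiB malloc bdev, 512 B blocks
./scripts/rpc.py bdev_set_qd_sampling_period Malloc_QD 10     # sample queue depth every 10 ms
./scripts/rpc.py bdev_get_iostat -b Malloc_QD | jq -r '.bdevs[0].queue_depth_polling_period'   # expect 10
./scripts/rpc.py bdev_malloc_delete Malloc_QD                 # teardown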
00:06:59.260 [2024-07-14 21:06:10.696290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:06:59.831 EAL: TSC is not safe to use in SMP mode 00:06:59.831 EAL: TSC is not invariant 00:07:00.096 [2024-07-14 21:06:11.382554] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:00.096 [2024-07-14 21:06:11.465623] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:07:00.096 [2024-07-14 21:06:11.467829] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:07:00.354 21:06:11 blockdev_general.bdev_error -- bdev/blockdev.sh@476 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:00.354 Dev_1 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.354 21:06:11 blockdev_general.bdev_error -- bdev/blockdev.sh@477 -- # waitforbdev Dev_1 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:00.354 [ 00:07:00.354 { 00:07:00.354 "name": "Dev_1", 00:07:00.354 "aliases": [ 00:07:00.354 "e600fb24-4224-11ef-aa83-81fbc7dfef58" 00:07:00.354 ], 00:07:00.354 "product_name": "Malloc disk", 00:07:00.354 "block_size": 512, 00:07:00.354 "num_blocks": 262144, 00:07:00.354 "uuid": "e600fb24-4224-11ef-aa83-81fbc7dfef58", 00:07:00.354 "assigned_rate_limits": { 00:07:00.354 "rw_ios_per_sec": 0, 00:07:00.354 "rw_mbytes_per_sec": 0, 00:07:00.354 "r_mbytes_per_sec": 0, 00:07:00.354 "w_mbytes_per_sec": 0 00:07:00.354 }, 00:07:00.354 "claimed": false, 00:07:00.354 "zoned": false, 00:07:00.354 "supported_io_types": { 00:07:00.354 "read": true, 00:07:00.354 "write": true, 00:07:00.354 "unmap": true, 00:07:00.354 "flush": true, 00:07:00.354 "reset": true, 00:07:00.354 "nvme_admin": false, 00:07:00.354 "nvme_io": false, 00:07:00.354 "nvme_io_md": false, 00:07:00.354 "write_zeroes": true, 00:07:00.354 "zcopy": true, 
00:07:00.354 "get_zone_info": false, 00:07:00.354 "zone_management": false, 00:07:00.354 "zone_append": false, 00:07:00.354 "compare": false, 00:07:00.354 "compare_and_write": false, 00:07:00.354 "abort": true, 00:07:00.354 "seek_hole": false, 00:07:00.354 "seek_data": false, 00:07:00.354 "copy": true, 00:07:00.354 "nvme_iov_md": false 00:07:00.354 }, 00:07:00.354 "memory_domains": [ 00:07:00.354 { 00:07:00.354 "dma_device_id": "system", 00:07:00.354 "dma_device_type": 1 00:07:00.354 }, 00:07:00.354 { 00:07:00.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.354 "dma_device_type": 2 00:07:00.354 } 00:07:00.354 ], 00:07:00.354 "driver_specific": {} 00:07:00.354 } 00:07:00.354 ] 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:07:00.354 21:06:11 blockdev_general.bdev_error -- bdev/blockdev.sh@478 -- # rpc_cmd bdev_error_create Dev_1 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:00.354 true 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.354 21:06:11 blockdev_general.bdev_error -- bdev/blockdev.sh@479 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:00.354 Dev_2 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.354 21:06:11 blockdev_general.bdev_error -- bdev/blockdev.sh@480 -- # waitforbdev Dev_2 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:00.354 [ 00:07:00.354 { 00:07:00.354 "name": "Dev_2", 00:07:00.354 "aliases": [ 00:07:00.354 "e607148e-4224-11ef-aa83-81fbc7dfef58" 00:07:00.354 ], 00:07:00.354 "product_name": "Malloc disk", 00:07:00.354 "block_size": 512, 00:07:00.354 "num_blocks": 262144, 00:07:00.354 "uuid": "e607148e-4224-11ef-aa83-81fbc7dfef58", 00:07:00.354 "assigned_rate_limits": { 00:07:00.354 "rw_ios_per_sec": 0, 00:07:00.354 "rw_mbytes_per_sec": 0, 
00:07:00.354 "r_mbytes_per_sec": 0, 00:07:00.354 "w_mbytes_per_sec": 0 00:07:00.354 }, 00:07:00.354 "claimed": false, 00:07:00.354 "zoned": false, 00:07:00.354 "supported_io_types": { 00:07:00.354 "read": true, 00:07:00.354 "write": true, 00:07:00.354 "unmap": true, 00:07:00.354 "flush": true, 00:07:00.354 "reset": true, 00:07:00.354 "nvme_admin": false, 00:07:00.354 "nvme_io": false, 00:07:00.354 "nvme_io_md": false, 00:07:00.354 "write_zeroes": true, 00:07:00.354 "zcopy": true, 00:07:00.354 "get_zone_info": false, 00:07:00.354 "zone_management": false, 00:07:00.354 "zone_append": false, 00:07:00.354 "compare": false, 00:07:00.354 "compare_and_write": false, 00:07:00.354 "abort": true, 00:07:00.354 "seek_hole": false, 00:07:00.354 "seek_data": false, 00:07:00.354 "copy": true, 00:07:00.354 "nvme_iov_md": false 00:07:00.354 }, 00:07:00.354 "memory_domains": [ 00:07:00.354 { 00:07:00.354 "dma_device_id": "system", 00:07:00.354 "dma_device_type": 1 00:07:00.354 }, 00:07:00.354 { 00:07:00.354 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:00.354 "dma_device_type": 2 00:07:00.354 } 00:07:00.354 ], 00:07:00.354 "driver_specific": {} 00:07:00.354 } 00:07:00.354 ] 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:07:00.354 21:06:11 blockdev_general.bdev_error -- bdev/blockdev.sh@481 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:00.354 21:06:11 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:00.354 21:06:11 blockdev_general.bdev_error -- bdev/blockdev.sh@484 -- # sleep 1 00:07:00.354 21:06:11 blockdev_general.bdev_error -- bdev/blockdev.sh@483 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:07:00.354 Running I/O for 5 seconds... 00:07:01.288 21:06:12 blockdev_general.bdev_error -- bdev/blockdev.sh@487 -- # kill -0 48405 00:07:01.288 Process is existed as continue on error is set. Pid: 48405 00:07:01.288 21:06:12 blockdev_general.bdev_error -- bdev/blockdev.sh@488 -- # echo 'Process is existed as continue on error is set. 
Pid: 48405' 00:07:01.288 21:06:12 blockdev_general.bdev_error -- bdev/blockdev.sh@495 -- # rpc_cmd bdev_error_delete EE_Dev_1 00:07:01.288 21:06:12 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.288 21:06:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:01.288 21:06:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.288 21:06:12 blockdev_general.bdev_error -- bdev/blockdev.sh@496 -- # rpc_cmd bdev_malloc_delete Dev_1 00:07:01.288 21:06:12 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:01.289 21:06:12 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:01.289 21:06:12 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:01.289 21:06:12 blockdev_general.bdev_error -- bdev/blockdev.sh@497 -- # sleep 5 00:07:01.547 Timeout while waiting for response: 00:07:01.547 00:07:01.547 00:07:05.728 00:07:05.728 Latency(us) 00:07:05.728 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.728 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:05.728 EE_Dev_1 : 0.92 352146.36 1375.57 5.41 0.00 45.24 21.41 137.77 00:07:05.728 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:05.728 Dev_2 : 5.00 839219.25 3278.20 0.00 0.00 18.88 4.60 22282.25 00:07:05.728 =================================================================================================================== 00:07:05.728 Total : 1191365.61 4653.77 5.41 0.00 20.78 4.60 22282.25 00:07:06.661 21:06:17 blockdev_general.bdev_error -- bdev/blockdev.sh@499 -- # killprocess 48405 00:07:06.661 21:06:17 blockdev_general.bdev_error -- common/autotest_common.sh@948 -- # '[' -z 48405 ']' 00:07:06.661 21:06:17 blockdev_general.bdev_error -- common/autotest_common.sh@952 -- # kill -0 48405 00:07:06.661 21:06:17 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # uname 00:07:06.661 21:06:17 blockdev_general.bdev_error -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:06.661 21:06:17 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # ps -c -o command 48405 00:07:06.661 21:06:17 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # tail -1 00:07:06.661 21:06:17 blockdev_general.bdev_error -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:06.661 killing process with pid 48405 00:07:06.661 21:06:17 blockdev_general.bdev_error -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:06.661 21:06:17 blockdev_general.bdev_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48405' 00:07:06.661 Received shutdown signal, test time was about 5.000000 seconds 00:07:06.661 00:07:06.661 Latency(us) 00:07:06.661 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:06.661 =================================================================================================================== 00:07:06.661 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:06.661 21:06:17 blockdev_general.bdev_error -- common/autotest_common.sh@967 -- # kill 48405 00:07:06.661 21:06:17 blockdev_general.bdev_error -- common/autotest_common.sh@972 -- # wait 48405 00:07:06.661 21:06:18 blockdev_general.bdev_error -- bdev/blockdev.sh@502 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x2 -q 16 -o 4096 -w randread -t 5 '' 00:07:06.661 Process error testing pid: 48445 
00:07:06.661 21:06:18 blockdev_general.bdev_error -- bdev/blockdev.sh@503 -- # ERR_PID=48445 00:07:06.661 21:06:18 blockdev_general.bdev_error -- bdev/blockdev.sh@504 -- # echo 'Process error testing pid: 48445' 00:07:06.661 21:06:18 blockdev_general.bdev_error -- bdev/blockdev.sh@505 -- # waitforlisten 48445 00:07:06.661 21:06:18 blockdev_general.bdev_error -- common/autotest_common.sh@829 -- # '[' -z 48445 ']' 00:07:06.661 21:06:18 blockdev_general.bdev_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:06.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:06.661 21:06:18 blockdev_general.bdev_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:06.661 21:06:18 blockdev_general.bdev_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:06.661 21:06:18 blockdev_general.bdev_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:06.661 21:06:18 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:06.661 [2024-07-14 21:06:18.070959] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:06.661 [2024-07-14 21:06:18.071118] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:07.225 EAL: TSC is not safe to use in SMP mode 00:07:07.225 EAL: TSC is not invariant 00:07:07.225 [2024-07-14 21:06:18.565973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:07.225 [2024-07-14 21:06:18.641355] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:07:07.225 [2024-07-14 21:06:18.643776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@862 -- # return 0 00:07:07.791 21:06:19 blockdev_general.bdev_error -- bdev/blockdev.sh@507 -- # rpc_cmd bdev_malloc_create -b Dev_1 128 512 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:07.791 Dev_1 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.791 21:06:19 blockdev_general.bdev_error -- bdev/blockdev.sh@508 -- # waitforbdev Dev_1 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_1 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 
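The waitforbdev helper being traced here is only a readiness poll. A minimal sketch of the same pattern, using the names from the trace:

./scripts/rpc.py bdev_wait_for_examine            # block until bdev examination settles
./scripts/rpc.py bdev_get_bdevs -b Dev_1 -t 2000  # returns once Dev_1 is registered, 2 s timeout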
00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_1 -t 2000 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:07.791 [ 00:07:07.791 { 00:07:07.791 "name": "Dev_1", 00:07:07.791 "aliases": [ 00:07:07.791 "ea6ce0b9-4224-11ef-aa83-81fbc7dfef58" 00:07:07.791 ], 00:07:07.791 "product_name": "Malloc disk", 00:07:07.791 "block_size": 512, 00:07:07.791 "num_blocks": 262144, 00:07:07.791 "uuid": "ea6ce0b9-4224-11ef-aa83-81fbc7dfef58", 00:07:07.791 "assigned_rate_limits": { 00:07:07.791 "rw_ios_per_sec": 0, 00:07:07.791 "rw_mbytes_per_sec": 0, 00:07:07.791 "r_mbytes_per_sec": 0, 00:07:07.791 "w_mbytes_per_sec": 0 00:07:07.791 }, 00:07:07.791 "claimed": false, 00:07:07.791 "zoned": false, 00:07:07.791 "supported_io_types": { 00:07:07.791 "read": true, 00:07:07.791 "write": true, 00:07:07.791 "unmap": true, 00:07:07.791 "flush": true, 00:07:07.791 "reset": true, 00:07:07.791 "nvme_admin": false, 00:07:07.791 "nvme_io": false, 00:07:07.791 "nvme_io_md": false, 00:07:07.791 "write_zeroes": true, 00:07:07.791 "zcopy": true, 00:07:07.791 "get_zone_info": false, 00:07:07.791 "zone_management": false, 00:07:07.791 "zone_append": false, 00:07:07.791 "compare": false, 00:07:07.791 "compare_and_write": false, 00:07:07.791 "abort": true, 00:07:07.791 "seek_hole": false, 00:07:07.791 "seek_data": false, 00:07:07.791 "copy": true, 00:07:07.791 "nvme_iov_md": false 00:07:07.791 }, 00:07:07.791 "memory_domains": [ 00:07:07.791 { 00:07:07.791 "dma_device_id": "system", 00:07:07.791 "dma_device_type": 1 00:07:07.791 }, 00:07:07.791 { 00:07:07.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.791 "dma_device_type": 2 00:07:07.791 } 00:07:07.791 ], 00:07:07.791 "driver_specific": {} 00:07:07.791 } 00:07:07.791 ] 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:07:07.791 21:06:19 blockdev_general.bdev_error -- bdev/blockdev.sh@509 -- # rpc_cmd bdev_error_create Dev_1 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:07.791 true 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.791 21:06:19 blockdev_general.bdev_error -- bdev/blockdev.sh@510 -- # rpc_cmd bdev_malloc_create -b Dev_2 128 512 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:07.791 Dev_2 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.791 21:06:19 blockdev_general.bdev_error -- bdev/blockdev.sh@511 -- # waitforbdev Dev_2 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@897 -- # local bdev_name=Dev_2 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@899 -- # local i 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:07.791 21:06:19 
blockdev_general.bdev_error -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Dev_2 -t 2000 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:07.791 [ 00:07:07.791 { 00:07:07.791 "name": "Dev_2", 00:07:07.791 "aliases": [ 00:07:07.791 "ea72fa99-4224-11ef-aa83-81fbc7dfef58" 00:07:07.791 ], 00:07:07.791 "product_name": "Malloc disk", 00:07:07.791 "block_size": 512, 00:07:07.791 "num_blocks": 262144, 00:07:07.791 "uuid": "ea72fa99-4224-11ef-aa83-81fbc7dfef58", 00:07:07.791 "assigned_rate_limits": { 00:07:07.791 "rw_ios_per_sec": 0, 00:07:07.791 "rw_mbytes_per_sec": 0, 00:07:07.791 "r_mbytes_per_sec": 0, 00:07:07.791 "w_mbytes_per_sec": 0 00:07:07.791 }, 00:07:07.791 "claimed": false, 00:07:07.791 "zoned": false, 00:07:07.791 "supported_io_types": { 00:07:07.791 "read": true, 00:07:07.791 "write": true, 00:07:07.791 "unmap": true, 00:07:07.791 "flush": true, 00:07:07.791 "reset": true, 00:07:07.791 "nvme_admin": false, 00:07:07.791 "nvme_io": false, 00:07:07.791 "nvme_io_md": false, 00:07:07.791 "write_zeroes": true, 00:07:07.791 "zcopy": true, 00:07:07.791 "get_zone_info": false, 00:07:07.791 "zone_management": false, 00:07:07.791 "zone_append": false, 00:07:07.791 "compare": false, 00:07:07.791 "compare_and_write": false, 00:07:07.791 "abort": true, 00:07:07.791 "seek_hole": false, 00:07:07.791 "seek_data": false, 00:07:07.791 "copy": true, 00:07:07.791 "nvme_iov_md": false 00:07:07.791 }, 00:07:07.791 "memory_domains": [ 00:07:07.791 { 00:07:07.791 "dma_device_id": "system", 00:07:07.791 "dma_device_type": 1 00:07:07.791 }, 00:07:07.791 { 00:07:07.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:07.791 "dma_device_type": 2 00:07:07.791 } 00:07:07.791 ], 00:07:07.791 "driver_specific": {} 00:07:07.791 } 00:07:07.791 ] 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.791 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@905 -- # return 0 00:07:07.791 21:06:19 blockdev_general.bdev_error -- bdev/blockdev.sh@512 -- # rpc_cmd bdev_error_inject_error EE_Dev_1 all failure -n 5 00:07:07.792 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:07.792 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:07.792 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:07.792 21:06:19 blockdev_general.bdev_error -- bdev/blockdev.sh@515 -- # NOT wait 48445 00:07:07.792 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@648 -- # local es=0 00:07:07.792 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@650 -- # valid_exec_arg wait 48445 00:07:07.792 21:06:19 blockdev_general.bdev_error -- bdev/blockdev.sh@514 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests 00:07:07.792 
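At this point the harness expects failure by design: this second bdevperf was started without continue-on-error, so the errors injected on EE_Dev_1 abort the run and NOT wait asserts a non-zero exit. A hedged sketch of that check ($rootdir stands in for the SPDK checkout and ERR_PID for the bdevperf pid recorded above):

"$rootdir"/examples/bdev/bdevperf/bdevperf.py -t 1 perform_tests &  # trigger the run over RPC
if wait "$ERR_PID"; then                                            # NOT wait: exit 0 would be a bug here
    echo "bdevperf unexpectedly exited 0" >&2
    exit 1
fi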
21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@636 -- # local arg=wait 00:07:07.792 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.792 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # type -t wait 00:07:07.792 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:07.792 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # wait 48445 00:07:07.792 Running I/O for 5 seconds... 00:07:07.792 task offset: 211104 on job bdev=EE_Dev_1 fails 00:07:07.792 00:07:07.792 Latency(us) 00:07:07.792 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:07.792 Job: EE_Dev_1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:07.792 Job: EE_Dev_1 ended in about 0.00 seconds with error 00:07:07.792 EE_Dev_1 : 0.00 183333.33 716.15 41666.67 0.00 57.12 20.83 110.31 00:07:07.792 Job: Dev_2 (Core Mask 0x2, workload: randread, depth: 16, IO size: 4096) 00:07:07.792 Dev_2 : 0.00 226950.35 886.52 0.00 0.00 31.35 21.41 44.92 00:07:07.792 =================================================================================================================== 00:07:07.792 Total : 410283.69 1602.67 41666.67 0.00 43.14 20.83 110.31 00:07:07.792 [2024-07-14 21:06:19.273465] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:07.792 request: 00:07:07.792 { 00:07:07.792 "method": "perform_tests", 00:07:07.792 "req_id": 1 00:07:07.792 } 00:07:07.792 Got JSON-RPC error response 00:07:07.792 response: 00:07:07.792 { 00:07:07.792 "code": -32603, 00:07:07.792 "message": "bdevperf failed with error Operation not permitted" 00:07:07.792 } 00:07:08.050 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@651 -- # es=255 00:07:08.050 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:08.050 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@660 -- # es=127 00:07:08.050 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@661 -- # case "$es" in 00:07:08.050 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@668 -- # es=1 00:07:08.050 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:08.050 00:07:08.050 real 0m8.796s 00:07:08.050 user 0m8.634s 00:07:08.050 sys 0m1.400s 00:07:08.050 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:08.050 21:06:19 blockdev_general.bdev_error -- common/autotest_common.sh@10 -- # set +x 00:07:08.050 ************************************ 00:07:08.050 END TEST bdev_error 00:07:08.050 ************************************ 00:07:08.050 21:06:19 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:08.050 21:06:19 blockdev_general -- bdev/blockdev.sh@791 -- # run_test bdev_stat stat_test_suite '' 00:07:08.050 21:06:19 blockdev_general -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:08.050 21:06:19 blockdev_general -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:08.050 21:06:19 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:08.050 ************************************ 00:07:08.050 START TEST bdev_stat 00:07:08.050 ************************************ 00:07:08.050 21:06:19 blockdev_general.bdev_stat -- common/autotest_common.sh@1123 -- # stat_test_suite '' 00:07:08.050 21:06:19 blockdev_general.bdev_stat -- bdev/blockdev.sh@592 
-- # STAT_DEV=Malloc_STAT 00:07:08.050 21:06:19 blockdev_general.bdev_stat -- bdev/blockdev.sh@596 -- # STAT_PID=48472 00:07:08.050 Process Bdev IO statistics testing pid: 48472 00:07:08.050 21:06:19 blockdev_general.bdev_stat -- bdev/blockdev.sh@597 -- # echo 'Process Bdev IO statistics testing pid: 48472' 00:07:08.050 21:06:19 blockdev_general.bdev_stat -- bdev/blockdev.sh@598 -- # trap 'cleanup; killprocess $STAT_PID; exit 1' SIGINT SIGTERM EXIT 00:07:08.050 21:06:19 blockdev_general.bdev_stat -- bdev/blockdev.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -m 0x3 -q 256 -o 4096 -w randread -t 10 -C '' 00:07:08.050 21:06:19 blockdev_general.bdev_stat -- bdev/blockdev.sh@599 -- # waitforlisten 48472 00:07:08.050 21:06:19 blockdev_general.bdev_stat -- common/autotest_common.sh@829 -- # '[' -z 48472 ']' 00:07:08.050 21:06:19 blockdev_general.bdev_stat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.050 21:06:19 blockdev_general.bdev_stat -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:08.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.050 21:06:19 blockdev_general.bdev_stat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.050 21:06:19 blockdev_general.bdev_stat -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:08.050 21:06:19 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:08.050 [2024-07-14 21:06:19.543665] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:08.050 [2024-07-14 21:06:19.543931] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:08.617 EAL: TSC is not safe to use in SMP mode 00:07:08.617 EAL: TSC is not invariant 00:07:08.617 [2024-07-14 21:06:20.105114] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:08.878 [2024-07-14 21:06:20.204226] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:08.878 [2024-07-14 21:06:20.204292] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
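The stat pass starting here takes two bdev_get_iostat snapshots around a timed run and checks that the summed per-channel read counts fall between them. Reduced to its RPCs, a hedged sketch (names match the trace that follows):

./scripts/rpc.py bdev_malloc_create -b Malloc_STAT 128 512
./scripts/rpc.py bdev_get_iostat -b Malloc_STAT        # snapshot 1: aggregate counters
./scripts/rpc.py bdev_get_iostat -b Malloc_STAT -c     # per-channel breakdown (thread_id 2 and 3 below)
./scripts/rpc.py bdev_get_iostat -b Malloc_STAT        # snapshot 2, taken after further I/O
./scripts/rpc.py bdev_malloc_delete Malloc_STAT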
00:07:08.878 [2024-07-14 21:06:20.207727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:08.878 [2024-07-14 21:06:20.207716] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@862 -- # return 0 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- bdev/blockdev.sh@601 -- # rpc_cmd bdev_malloc_create -b Malloc_STAT 128 512 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:09.152 Malloc_STAT 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- bdev/blockdev.sh@602 -- # waitforbdev Malloc_STAT 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@897 -- # local bdev_name=Malloc_STAT 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@899 -- # local i 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@902 -- # rpc_cmd bdev_wait_for_examine 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@904 -- # rpc_cmd bdev_get_bdevs -b Malloc_STAT -t 2000 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:09.152 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:09.153 [ 00:07:09.153 { 00:07:09.153 "name": "Malloc_STAT", 00:07:09.153 "aliases": [ 00:07:09.153 "eb51926d-4224-11ef-aa83-81fbc7dfef58" 00:07:09.153 ], 00:07:09.153 "product_name": "Malloc disk", 00:07:09.153 "block_size": 512, 00:07:09.153 "num_blocks": 262144, 00:07:09.153 "uuid": "eb51926d-4224-11ef-aa83-81fbc7dfef58", 00:07:09.153 "assigned_rate_limits": { 00:07:09.153 "rw_ios_per_sec": 0, 00:07:09.153 "rw_mbytes_per_sec": 0, 00:07:09.153 "r_mbytes_per_sec": 0, 00:07:09.153 "w_mbytes_per_sec": 0 00:07:09.153 }, 00:07:09.153 "claimed": false, 00:07:09.153 "zoned": false, 00:07:09.153 "supported_io_types": { 00:07:09.153 "read": true, 00:07:09.153 "write": true, 00:07:09.153 "unmap": true, 00:07:09.153 "flush": true, 00:07:09.153 "reset": true, 00:07:09.153 "nvme_admin": false, 00:07:09.153 "nvme_io": false, 00:07:09.153 "nvme_io_md": false, 00:07:09.153 "write_zeroes": true, 00:07:09.153 "zcopy": true, 00:07:09.153 "get_zone_info": false, 00:07:09.153 "zone_management": false, 00:07:09.153 "zone_append": false, 00:07:09.153 "compare": false, 00:07:09.153 "compare_and_write": false, 00:07:09.153 "abort": true, 00:07:09.153 "seek_hole": false, 00:07:09.153 "seek_data": false, 00:07:09.153 "copy": true, 00:07:09.153 "nvme_iov_md": false 00:07:09.153 }, 00:07:09.153 "memory_domains": [ 00:07:09.153 { 
00:07:09.153 "dma_device_id": "system", 00:07:09.153 "dma_device_type": 1 00:07:09.153 }, 00:07:09.153 { 00:07:09.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:09.153 "dma_device_type": 2 00:07:09.153 } 00:07:09.153 ], 00:07:09.153 "driver_specific": {} 00:07:09.153 } 00:07:09.153 ] 00:07:09.153 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:09.153 21:06:20 blockdev_general.bdev_stat -- common/autotest_common.sh@905 -- # return 0 00:07:09.153 21:06:20 blockdev_general.bdev_stat -- bdev/blockdev.sh@605 -- # sleep 2 00:07:09.153 21:06:20 blockdev_general.bdev_stat -- bdev/blockdev.sh@604 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests 00:07:09.409 Running I/O for 10 seconds... 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@606 -- # stat_function_test Malloc_STAT 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@559 -- # local bdev_name=Malloc_STAT 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@560 -- # local iostats 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@561 -- # local io_count1 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@562 -- # local io_count2 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@563 -- # local iostats_per_channel 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@564 -- # local io_count_per_channel1 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@565 -- # local io_count_per_channel2 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@566 -- # local io_count_per_channel_all=0 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@568 -- # iostats='{ 00:07:11.302 "tick_rate": 2199999327, 00:07:11.302 "ticks": 732936786039, 00:07:11.302 "bdevs": [ 00:07:11.302 { 00:07:11.302 "name": "Malloc_STAT", 00:07:11.302 "bytes_read": 13622088192, 00:07:11.302 "num_read_ops": 3325699, 00:07:11.302 "bytes_written": 0, 00:07:11.302 "num_write_ops": 0, 00:07:11.302 "bytes_unmapped": 0, 00:07:11.302 "num_unmap_ops": 0, 00:07:11.302 "bytes_copied": 0, 00:07:11.302 "num_copy_ops": 0, 00:07:11.302 "read_latency_ticks": 2192376575701, 00:07:11.302 "max_read_latency_ticks": 1439788, 00:07:11.302 "min_read_latency_ticks": 42146, 00:07:11.302 "write_latency_ticks": 0, 00:07:11.302 "max_write_latency_ticks": 0, 00:07:11.302 "min_write_latency_ticks": 0, 00:07:11.302 "unmap_latency_ticks": 0, 00:07:11.302 "max_unmap_latency_ticks": 0, 00:07:11.302 "min_unmap_latency_ticks": 0, 00:07:11.302 "copy_latency_ticks": 0, 00:07:11.302 "max_copy_latency_ticks": 0, 00:07:11.302 "min_copy_latency_ticks": 0, 00:07:11.302 "io_error": {} 00:07:11.302 } 00:07:11.302 ] 00:07:11.302 }' 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # jq -r '.bdevs[0].num_read_ops' 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@569 -- # io_count1=3325699 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT -c 00:07:11.302 21:06:22 
blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.302 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@571 -- # iostats_per_channel='{ 00:07:11.302 "tick_rate": 2199999327, 00:07:11.302 "ticks": 733002239189, 00:07:11.302 "name": "Malloc_STAT", 00:07:11.302 "channels": [ 00:07:11.302 { 00:07:11.302 "thread_id": 2, 00:07:11.302 "bytes_read": 6789529600, 00:07:11.302 "num_read_ops": 1657600, 00:07:11.302 "bytes_written": 0, 00:07:11.302 "num_write_ops": 0, 00:07:11.302 "bytes_unmapped": 0, 00:07:11.302 "num_unmap_ops": 0, 00:07:11.302 "bytes_copied": 0, 00:07:11.302 "num_copy_ops": 0, 00:07:11.302 "read_latency_ticks": 1112845107589, 00:07:11.302 "max_read_latency_ticks": 1439788, 00:07:11.302 "min_read_latency_ticks": 568556, 00:07:11.302 "write_latency_ticks": 0, 00:07:11.302 "max_write_latency_ticks": 0, 00:07:11.302 "min_write_latency_ticks": 0, 00:07:11.302 "unmap_latency_ticks": 0, 00:07:11.302 "max_unmap_latency_ticks": 0, 00:07:11.302 "min_unmap_latency_ticks": 0, 00:07:11.302 "copy_latency_ticks": 0, 00:07:11.302 "max_copy_latency_ticks": 0, 00:07:11.302 "min_copy_latency_ticks": 0 00:07:11.302 }, 00:07:11.302 { 00:07:11.302 "thread_id": 3, 00:07:11.302 "bytes_read": 6998196224, 00:07:11.302 "num_read_ops": 1708544, 00:07:11.302 "bytes_written": 0, 00:07:11.302 "num_write_ops": 0, 00:07:11.302 "bytes_unmapped": 0, 00:07:11.302 "num_unmap_ops": 0, 00:07:11.302 "bytes_copied": 0, 00:07:11.303 "num_copy_ops": 0, 00:07:11.303 "read_latency_ticks": 1112983563718, 00:07:11.303 "max_read_latency_ticks": 1332882, 00:07:11.303 "min_read_latency_ticks": 557302, 00:07:11.303 "write_latency_ticks": 0, 00:07:11.303 "max_write_latency_ticks": 0, 00:07:11.303 "min_write_latency_ticks": 0, 00:07:11.303 "unmap_latency_ticks": 0, 00:07:11.303 "max_unmap_latency_ticks": 0, 00:07:11.303 "min_unmap_latency_ticks": 0, 00:07:11.303 "copy_latency_ticks": 0, 00:07:11.303 "max_copy_latency_ticks": 0, 00:07:11.303 "min_copy_latency_ticks": 0 00:07:11.303 } 00:07:11.303 ] 00:07:11.303 }' 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # jq -r '.channels[0].num_read_ops' 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@572 -- # io_count_per_channel1=1657600 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@573 -- # io_count_per_channel_all=1657600 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # jq -r '.channels[1].num_read_ops' 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@574 -- # io_count_per_channel2=1708544 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@575 -- # io_count_per_channel_all=3366144 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # rpc_cmd bdev_get_iostat -b Malloc_STAT 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@577 -- # iostats='{ 00:07:11.303 "tick_rate": 2199999327, 00:07:11.303 "ticks": 733093224931, 00:07:11.303 "bdevs": [ 00:07:11.303 { 00:07:11.303 "name": "Malloc_STAT", 
00:07:11.303 "bytes_read": 14022644224, 00:07:11.303 "num_read_ops": 3423491, 00:07:11.303 "bytes_written": 0, 00:07:11.303 "num_write_ops": 0, 00:07:11.303 "bytes_unmapped": 0, 00:07:11.303 "num_unmap_ops": 0, 00:07:11.303 "bytes_copied": 0, 00:07:11.303 "num_copy_ops": 0, 00:07:11.303 "read_latency_ticks": 2272406332991, 00:07:11.303 "max_read_latency_ticks": 1527370, 00:07:11.303 "min_read_latency_ticks": 42146, 00:07:11.303 "write_latency_ticks": 0, 00:07:11.303 "max_write_latency_ticks": 0, 00:07:11.303 "min_write_latency_ticks": 0, 00:07:11.303 "unmap_latency_ticks": 0, 00:07:11.303 "max_unmap_latency_ticks": 0, 00:07:11.303 "min_unmap_latency_ticks": 0, 00:07:11.303 "copy_latency_ticks": 0, 00:07:11.303 "max_copy_latency_ticks": 0, 00:07:11.303 "min_copy_latency_ticks": 0, 00:07:11.303 "io_error": {} 00:07:11.303 } 00:07:11.303 ] 00:07:11.303 }' 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # jq -r '.bdevs[0].num_read_ops' 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@578 -- # io_count2=3423491 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3366144 -lt 3325699 ']' 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@583 -- # '[' 3366144 -gt 3423491 ']' 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@608 -- # rpc_cmd bdev_malloc_delete Malloc_STAT 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:11.303 00:07:11.303 Latency(us) 00:07:11.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.303 Job: Malloc_STAT (Core Mask 0x1, workload: randread, depth: 256, IO size: 4096) 00:07:11.303 Malloc_STAT : 2.04 831141.19 3246.65 0.00 0.00 307.76 110.31 696.32 00:07:11.303 Job: Malloc_STAT (Core Mask 0x2, workload: randread, depth: 256, IO size: 4096) 00:07:11.303 Malloc_STAT : 2.04 859965.34 3359.24 0.00 0.00 297.45 66.56 606.95 00:07:11.303 =================================================================================================================== 00:07:11.303 Total : 1691106.52 6605.88 0.00 0.00 302.52 66.56 696.32 00:07:11.303 0 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- bdev/blockdev.sh@609 -- # killprocess 48472 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@948 -- # '[' -z 48472 ']' 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@952 -- # kill -0 48472 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # uname 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # ps -c -o command 48472 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # tail -1 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:11.303 killing process with pid 48472 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48472' 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- 
common/autotest_common.sh@967 -- # kill 48472 00:07:11.303 Received shutdown signal, test time was about 2.082977 seconds 00:07:11.303 00:07:11.303 Latency(us) 00:07:11.303 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:11.303 =================================================================================================================== 00:07:11.303 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:11.303 21:06:22 blockdev_general.bdev_stat -- common/autotest_common.sh@972 -- # wait 48472 00:07:11.559 21:06:23 blockdev_general.bdev_stat -- bdev/blockdev.sh@610 -- # trap - SIGINT SIGTERM EXIT 00:07:11.559 00:07:11.559 real 0m3.486s 00:07:11.559 user 0m6.147s 00:07:11.559 sys 0m0.787s 00:07:11.559 21:06:23 blockdev_general.bdev_stat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.559 21:06:23 blockdev_general.bdev_stat -- common/autotest_common.sh@10 -- # set +x 00:07:11.559 ************************************ 00:07:11.559 END TEST bdev_stat 00:07:11.559 ************************************ 00:07:11.559 21:06:23 blockdev_general -- common/autotest_common.sh@1142 -- # return 0 00:07:11.559 21:06:23 blockdev_general -- bdev/blockdev.sh@794 -- # [[ bdev == gpt ]] 00:07:11.559 21:06:23 blockdev_general -- bdev/blockdev.sh@798 -- # [[ bdev == crypto_sw ]] 00:07:11.559 21:06:23 blockdev_general -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:07:11.559 21:06:23 blockdev_general -- bdev/blockdev.sh@811 -- # cleanup 00:07:11.559 21:06:23 blockdev_general -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:11.559 21:06:23 blockdev_general -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:11.559 21:06:23 blockdev_general -- bdev/blockdev.sh@26 -- # [[ bdev == rbd ]] 00:07:11.559 21:06:23 blockdev_general -- bdev/blockdev.sh@30 -- # [[ bdev == daos ]] 00:07:11.559 21:06:23 blockdev_general -- bdev/blockdev.sh@34 -- # [[ bdev = \g\p\t ]] 00:07:11.559 21:06:23 blockdev_general -- bdev/blockdev.sh@40 -- # [[ bdev == xnvme ]] 00:07:11.559 00:07:11.559 real 1m32.505s 00:07:11.559 user 4m30.070s 00:07:11.559 sys 0m27.491s 00:07:11.559 21:06:23 blockdev_general -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:11.559 21:06:23 blockdev_general -- common/autotest_common.sh@10 -- # set +x 00:07:11.559 ************************************ 00:07:11.559 END TEST blockdev_general 00:07:11.559 ************************************ 00:07:11.559 21:06:23 -- common/autotest_common.sh@1142 -- # return 0 00:07:11.559 21:06:23 -- spdk/autotest.sh@190 -- # run_test bdev_raid /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:11.559 21:06:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.819 21:06:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.819 21:06:23 -- common/autotest_common.sh@10 -- # set +x 00:07:11.819 ************************************ 00:07:11.819 START TEST bdev_raid 00:07:11.819 ************************************ 00:07:11.819 21:06:23 bdev_raid -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh 00:07:11.819 * Looking for test storage... 
00:07:11.819 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:07:11.819 21:06:23 bdev_raid -- bdev/bdev_raid.sh@13 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:07:11.819 21:06:23 bdev_raid -- bdev/nbd_common.sh@6 -- # set -e 00:07:11.819 21:06:23 bdev_raid -- bdev/bdev_raid.sh@15 -- # rpc_py='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock' 00:07:11.820 21:06:23 bdev_raid -- bdev/bdev_raid.sh@851 -- # mkdir -p /raidtest 00:07:11.820 21:06:23 bdev_raid -- bdev/bdev_raid.sh@852 -- # trap 'cleanup; exit 1' EXIT 00:07:11.820 21:06:23 bdev_raid -- bdev/bdev_raid.sh@854 -- # base_blocklen=512 00:07:11.820 21:06:23 bdev_raid -- bdev/bdev_raid.sh@856 -- # uname -s 00:07:11.820 21:06:23 bdev_raid -- bdev/bdev_raid.sh@856 -- # '[' FreeBSD = Linux ']' 00:07:11.820 21:06:23 bdev_raid -- bdev/bdev_raid.sh@863 -- # run_test raid0_resize_test raid0_resize_test 00:07:11.820 21:06:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:07:11.820 21:06:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:11.820 21:06:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:11.820 ************************************ 00:07:11.820 START TEST raid0_resize_test 00:07:11.820 ************************************ 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1123 -- # raid0_resize_test 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@347 -- # local blksize=512 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@348 -- # local bdev_size_mb=32 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@349 -- # local new_bdev_size_mb=64 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@350 -- # local blkcnt 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@351 -- # local raid_size_mb 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@352 -- # local new_raid_size_mb 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@355 -- # raid_pid=48577 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@356 -- # echo 'Process raid pid: 48577' 00:07:11.820 Process raid pid: 48577 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@357 -- # waitforlisten 48577 /var/tmp/spdk-raid.sock 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@354 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@829 -- # '[' -z 48577 ']' 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:11.820 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:11.820 21:06:23 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:11.820 [2024-07-14 21:06:23.303920] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
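Unlike the blockdev tests above, the raid suite talks to a bare bdev_svc app on a dedicated RPC socket. A hedged sketch of the harness just started, with $rootdir standing in for the SPDK checkout (waitforlisten simply polls until the UNIX socket accepts connections):

"$rootdir"/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid &
./scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs   # succeeds once the app is serving RPC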
00:07:11.820 [2024-07-14 21:06:23.304085] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:12.385 EAL: TSC is not safe to use in SMP mode 00:07:12.385 EAL: TSC is not invariant 00:07:12.385 [2024-07-14 21:06:23.823796] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.385 [2024-07-14 21:06:23.899054] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:12.385 [2024-07-14 21:06:23.901344] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:12.385 [2024-07-14 21:06:23.902201] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.385 [2024-07-14 21:06:23.902214] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:12.949 21:06:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:12.950 21:06:24 bdev_raid.raid0_resize_test -- common/autotest_common.sh@862 -- # return 0 00:07:12.950 21:06:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@359 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_1 32 512 00:07:12.950 Base_1 00:07:13.207 21:06:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@360 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_create Base_2 32 512 00:07:13.207 Base_2 00:07:13.207 21:06:24 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@362 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid 00:07:13.464 [2024-07-14 21:06:24.993308] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_1 is claimed 00:07:13.464 [2024-07-14 21:06:24.993937] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev Base_2 is claimed 00:07:13.464 [2024-07-14 21:06:24.993962] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x16211c434a00 00:07:13.464 [2024-07-14 21:06:24.993966] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:13.464 [2024-07-14 21:06:24.993998] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x16211c497e20 00:07:13.464 [2024-07-14 21:06:24.994058] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x16211c434a00 00:07:13.464 [2024-07-14 21:06:24.994062] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Raid, raid_bdev 0x16211c434a00 00:07:13.464 [2024-07-14 21:06:24.994095] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:13.464 21:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@365 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_1 64 00:07:13.722 [2024-07-14 21:06:25.197376] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:13.722 [2024-07-14 21:06:25.197424] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_1' was resized: old size 65536, new size 131072 00:07:13.722 true 00:07:13.722 21:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:13.722 21:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # jq '.[].num_blocks' 00:07:13.980 [2024-07-14 21:06:25.397419] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:13.980 
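[Editor's note] Condensed, the resize test is five RPCs; every command below is taken directly from the trace above:

  $rpc_py bdev_null_create Base_1 32 512                           # 32 MiB null bdev, 512 B blocks
  $rpc_py bdev_null_create Base_2 32 512
  $rpc_py bdev_raid_create -z 64 -r 0 -b 'Base_1 Base_2' -n Raid   # raid0, 64 KiB strip
  $rpc_py bdev_null_resize Base_1 64                               # grow one base bdev to 64 MiB
  blkcnt=$($rpc_py bdev_get_bdevs -b Raid | jq '.[].num_blocks')   # still 131072: raid0 only grows
                                                                   # once BOTH base bdevs are resized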
21:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@368 -- # blkcnt=131072 00:07:13.980 21:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@369 -- # raid_size_mb=64 00:07:13.980 21:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@370 -- # '[' 64 '!=' 64 ']' 00:07:13.980 21:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@376 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_null_resize Base_2 64 00:07:14.237 [2024-07-14 21:06:25.665348] bdev_raid.c:2262:raid_bdev_resize_base_bdev: *DEBUG*: raid_bdev_resize_base_bdev 00:07:14.237 [2024-07-14 21:06:25.665366] bdev_raid.c:2276:raid_bdev_resize_base_bdev: *NOTICE*: base_bdev 'Base_2' was resized: old size 65536, new size 131072 00:07:14.237 [2024-07-14 21:06:25.665422] bdev_raid.c:2290:raid_bdev_resize_base_bdev: *NOTICE*: raid bdev 'Raid': block count was changed from 131072 to 262144 00:07:14.237 true 00:07:14.238 21:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Raid 00:07:14.238 21:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # jq '.[].num_blocks' 00:07:14.496 [2024-07-14 21:06:25.869360] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@379 -- # blkcnt=262144 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@380 -- # raid_size_mb=128 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@381 -- # '[' 128 '!=' 128 ']' 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@386 -- # killprocess 48577 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@948 -- # '[' -z 48577 ']' 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@952 -- # kill -0 48577 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # uname 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # ps -c -o command 48577 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # tail -1 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:14.496 killing process with pid 48577 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48577' 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@967 -- # kill 48577 00:07:14.496 21:06:25 bdev_raid.raid0_resize_test -- common/autotest_common.sh@972 -- # wait 48577 00:07:14.496 [2024-07-14 21:06:25.896915] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:14.496 [2024-07-14 21:06:25.896938] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:14.496 [2024-07-14 21:06:25.896951] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:14.496 [2024-07-14 21:06:25.896955] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x16211c434a00 name Raid, state offline 00:07:14.496 [2024-07-14 21:06:25.897147] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:14.755 
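[Editor's note] The '[' 64 '!=' 64 ']' and '[' 128 '!=' 128 ']' checks above are plain shell arithmetic on the jq output. Spelled out with this run's numbers:

  # Convert the reported block count back to MiB (values from this run).
  blksize=512
  raid_size_mb=$((131072 * blksize / 1048576))      # 64 MiB: unchanged after resizing only Base_1
  new_raid_size_mb=$((262144 * blksize / 1048576))  # 128 MiB once Base_2 is resized as well
  [ "$new_raid_size_mb" != 128 ] && exit 1          # mirrors the failure check in the trace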
21:06:26 bdev_raid.raid0_resize_test -- bdev/bdev_raid.sh@388 -- # return 0 00:07:14.755 00:07:14.755 real 0m2.830s 00:07:14.755 user 0m4.114s 00:07:14.755 sys 0m0.762s 00:07:14.755 ************************************ 00:07:14.755 END TEST raid0_resize_test 00:07:14.755 21:06:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:14.755 21:06:26 bdev_raid.raid0_resize_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.755 ************************************ 00:07:14.755 21:06:26 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:14.755 21:06:26 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:07:14.755 21:06:26 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:07:14.755 21:06:26 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 2 false 00:07:14.755 21:06:26 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:14.755 21:06:26 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:14.755 21:06:26 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:14.755 ************************************ 00:07:14.755 START TEST raid_state_function_test 00:07:14.755 ************************************ 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 false 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=48623 00:07:14.755 Process raid pid: 48623 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48623' 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 48623 /var/tmp/spdk-raid.sock 00:07:14.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 48623 ']' 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:14.755 21:06:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:14.755 [2024-07-14 21:06:26.188100] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:14.755 [2024-07-14 21:06:26.188394] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:15.321 EAL: TSC is not safe to use in SMP mode 00:07:15.321 EAL: TSC is not invariant 00:07:15.321 [2024-07-14 21:06:26.858955] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:15.580 [2024-07-14 21:06:26.957066] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
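[Editor's note] A sketch of how the create arguments seen in the next RPC are assembled for this run (raid0, superblock=false); the raid1 branch is inferred from the '[' raid0 '!=' raid1 ']' test in the trace and is an assumption about the untraced path:

  raid_level=raid0; strip_size=64; superblock=false   # this run's parameters
  if [ "$raid_level" != raid1 ]; then
    strip_size_create_arg="-z $strip_size"   # raid0/concat take a strip size
  else
    strip_size_create_arg=''                 # raid1 does not (inferred)
  fi
  [ "$superblock" = true ] && superblock_create_arg=-s || superblock_create_arg=''
  $rpc_py bdev_raid_create $strip_size_create_arg $superblock_create_arg \
      -r "$raid_level" -b 'BaseBdev1 BaseBdev2' -n Existed_Raid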
00:07:15.580 [2024-07-14 21:06:26.959309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:15.580 [2024-07-14 21:06:26.960070] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.580 [2024-07-14 21:06:26.960077] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:15.839 21:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:15.839 21:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:07:15.839 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:16.097 [2024-07-14 21:06:27.422565] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:16.097 [2024-07-14 21:06:27.422630] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:16.097 [2024-07-14 21:06:27.422635] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:16.097 [2024-07-14 21:06:27.422651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:16.097 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:16.097 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:16.097 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:16.097 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:16.097 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:16.097 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:16.097 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:16.097 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:16.097 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:16.097 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:16.097 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:16.097 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:16.355 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:16.355 "name": "Existed_Raid", 00:07:16.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.355 "strip_size_kb": 64, 00:07:16.355 "state": "configuring", 00:07:16.355 "raid_level": "raid0", 00:07:16.355 "superblock": false, 00:07:16.355 "num_base_bdevs": 2, 00:07:16.355 "num_base_bdevs_discovered": 0, 00:07:16.355 "num_base_bdevs_operational": 2, 00:07:16.355 "base_bdevs_list": [ 00:07:16.355 { 00:07:16.355 "name": "BaseBdev1", 00:07:16.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.355 "is_configured": false, 00:07:16.355 "data_offset": 0, 00:07:16.355 "data_size": 0 00:07:16.355 }, 00:07:16.355 { 00:07:16.355 "name": "BaseBdev2", 
00:07:16.355 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:16.355 "is_configured": false, 00:07:16.355 "data_offset": 0, 00:07:16.355 "data_size": 0 00:07:16.355 } 00:07:16.355 ] 00:07:16.355 }' 00:07:16.355 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:16.356 21:06:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:16.614 21:06:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:16.614 [2024-07-14 21:06:28.162520] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:16.614 [2024-07-14 21:06:28.162545] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2263c2834500 name Existed_Raid, state configuring 00:07:16.873 21:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:16.873 [2024-07-14 21:06:28.374546] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:16.873 [2024-07-14 21:06:28.374623] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:16.873 [2024-07-14 21:06:28.374628] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:16.873 [2024-07-14 21:06:28.374645] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:16.873 21:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:17.134 [2024-07-14 21:06:28.619568] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:17.134 BaseBdev1 00:07:17.134 21:06:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:17.134 21:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:17.134 21:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:17.134 21:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:17.134 21:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:17.134 21:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:17.134 21:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:17.399 21:06:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:17.657 [ 00:07:17.657 { 00:07:17.657 "name": "BaseBdev1", 00:07:17.657 "aliases": [ 00:07:17.657 "f018c41f-4224-11ef-aa83-81fbc7dfef58" 00:07:17.657 ], 00:07:17.657 "product_name": "Malloc disk", 00:07:17.657 "block_size": 512, 00:07:17.657 "num_blocks": 65536, 00:07:17.657 "uuid": "f018c41f-4224-11ef-aa83-81fbc7dfef58", 00:07:17.657 "assigned_rate_limits": { 00:07:17.657 "rw_ios_per_sec": 0, 00:07:17.657 "rw_mbytes_per_sec": 0, 00:07:17.657 "r_mbytes_per_sec": 0, 00:07:17.657 "w_mbytes_per_sec": 0 00:07:17.657 }, 
00:07:17.657 "claimed": true, 00:07:17.657 "claim_type": "exclusive_write", 00:07:17.657 "zoned": false, 00:07:17.657 "supported_io_types": { 00:07:17.657 "read": true, 00:07:17.657 "write": true, 00:07:17.657 "unmap": true, 00:07:17.657 "flush": true, 00:07:17.657 "reset": true, 00:07:17.657 "nvme_admin": false, 00:07:17.657 "nvme_io": false, 00:07:17.657 "nvme_io_md": false, 00:07:17.657 "write_zeroes": true, 00:07:17.657 "zcopy": true, 00:07:17.657 "get_zone_info": false, 00:07:17.657 "zone_management": false, 00:07:17.657 "zone_append": false, 00:07:17.657 "compare": false, 00:07:17.657 "compare_and_write": false, 00:07:17.657 "abort": true, 00:07:17.657 "seek_hole": false, 00:07:17.657 "seek_data": false, 00:07:17.657 "copy": true, 00:07:17.657 "nvme_iov_md": false 00:07:17.657 }, 00:07:17.657 "memory_domains": [ 00:07:17.657 { 00:07:17.657 "dma_device_id": "system", 00:07:17.657 "dma_device_type": 1 00:07:17.657 }, 00:07:17.657 { 00:07:17.657 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:17.657 "dma_device_type": 2 00:07:17.657 } 00:07:17.657 ], 00:07:17.657 "driver_specific": {} 00:07:17.657 } 00:07:17.657 ] 00:07:17.657 21:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:17.657 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:17.657 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:17.657 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:17.657 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:17.657 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:17.657 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:17.657 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:17.657 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:17.657 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:17.657 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:17.657 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:17.657 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:17.916 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:17.916 "name": "Existed_Raid", 00:07:17.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:17.916 "strip_size_kb": 64, 00:07:17.916 "state": "configuring", 00:07:17.916 "raid_level": "raid0", 00:07:17.916 "superblock": false, 00:07:17.916 "num_base_bdevs": 2, 00:07:17.916 "num_base_bdevs_discovered": 1, 00:07:17.916 "num_base_bdevs_operational": 2, 00:07:17.916 "base_bdevs_list": [ 00:07:17.916 { 00:07:17.916 "name": "BaseBdev1", 00:07:17.916 "uuid": "f018c41f-4224-11ef-aa83-81fbc7dfef58", 00:07:17.916 "is_configured": true, 00:07:17.916 "data_offset": 0, 00:07:17.916 "data_size": 65536 00:07:17.916 }, 00:07:17.916 { 00:07:17.916 "name": "BaseBdev2", 00:07:17.916 "uuid": "00000000-0000-0000-0000-000000000000", 
00:07:17.916 "is_configured": false, 00:07:17.916 "data_offset": 0, 00:07:17.916 "data_size": 0 00:07:17.916 } 00:07:17.916 ] 00:07:17.916 }' 00:07:17.916 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:17.916 21:06:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:18.175 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:18.434 [2024-07-14 21:06:29.822657] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:18.434 [2024-07-14 21:06:29.822700] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2263c2834500 name Existed_Raid, state configuring 00:07:18.434 21:06:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:18.693 [2024-07-14 21:06:30.018708] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:18.693 [2024-07-14 21:06:30.019718] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:18.693 [2024-07-14 21:06:30.019780] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:18.693 "name": "Existed_Raid", 00:07:18.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.693 "strip_size_kb": 64, 00:07:18.693 "state": "configuring", 00:07:18.693 "raid_level": "raid0", 00:07:18.693 "superblock": false, 00:07:18.693 "num_base_bdevs": 2, 00:07:18.693 "num_base_bdevs_discovered": 1, 00:07:18.693 
"num_base_bdevs_operational": 2, 00:07:18.693 "base_bdevs_list": [ 00:07:18.693 { 00:07:18.693 "name": "BaseBdev1", 00:07:18.693 "uuid": "f018c41f-4224-11ef-aa83-81fbc7dfef58", 00:07:18.693 "is_configured": true, 00:07:18.693 "data_offset": 0, 00:07:18.693 "data_size": 65536 00:07:18.693 }, 00:07:18.693 { 00:07:18.693 "name": "BaseBdev2", 00:07:18.693 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:18.693 "is_configured": false, 00:07:18.693 "data_offset": 0, 00:07:18.693 "data_size": 0 00:07:18.693 } 00:07:18.693 ] 00:07:18.693 }' 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:18.693 21:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:19.260 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:19.260 [2024-07-14 21:06:30.726846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:19.260 [2024-07-14 21:06:30.726876] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2263c2834a00 00:07:19.260 [2024-07-14 21:06:30.726880] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:19.260 [2024-07-14 21:06:30.726900] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2263c2897e20 00:07:19.260 [2024-07-14 21:06:30.727017] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2263c2834a00 00:07:19.260 [2024-07-14 21:06:30.727022] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2263c2834a00 00:07:19.260 [2024-07-14 21:06:30.727057] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:19.260 BaseBdev2 00:07:19.260 21:06:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:19.260 21:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:19.260 21:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:19.260 21:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:19.260 21:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:19.260 21:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:19.260 21:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:19.518 21:06:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:19.776 [ 00:07:19.776 { 00:07:19.776 "name": "BaseBdev2", 00:07:19.776 "aliases": [ 00:07:19.776 "f15a716d-4224-11ef-aa83-81fbc7dfef58" 00:07:19.776 ], 00:07:19.776 "product_name": "Malloc disk", 00:07:19.776 "block_size": 512, 00:07:19.776 "num_blocks": 65536, 00:07:19.776 "uuid": "f15a716d-4224-11ef-aa83-81fbc7dfef58", 00:07:19.776 "assigned_rate_limits": { 00:07:19.776 "rw_ios_per_sec": 0, 00:07:19.776 "rw_mbytes_per_sec": 0, 00:07:19.776 "r_mbytes_per_sec": 0, 00:07:19.776 "w_mbytes_per_sec": 0 00:07:19.776 }, 00:07:19.776 "claimed": true, 00:07:19.776 "claim_type": "exclusive_write", 00:07:19.776 "zoned": 
false, 00:07:19.776 "supported_io_types": { 00:07:19.776 "read": true, 00:07:19.776 "write": true, 00:07:19.776 "unmap": true, 00:07:19.776 "flush": true, 00:07:19.776 "reset": true, 00:07:19.776 "nvme_admin": false, 00:07:19.776 "nvme_io": false, 00:07:19.776 "nvme_io_md": false, 00:07:19.776 "write_zeroes": true, 00:07:19.776 "zcopy": true, 00:07:19.776 "get_zone_info": false, 00:07:19.776 "zone_management": false, 00:07:19.776 "zone_append": false, 00:07:19.776 "compare": false, 00:07:19.776 "compare_and_write": false, 00:07:19.776 "abort": true, 00:07:19.776 "seek_hole": false, 00:07:19.776 "seek_data": false, 00:07:19.776 "copy": true, 00:07:19.776 "nvme_iov_md": false 00:07:19.776 }, 00:07:19.776 "memory_domains": [ 00:07:19.776 { 00:07:19.776 "dma_device_id": "system", 00:07:19.776 "dma_device_type": 1 00:07:19.776 }, 00:07:19.776 { 00:07:19.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:19.776 "dma_device_type": 2 00:07:19.776 } 00:07:19.776 ], 00:07:19.776 "driver_specific": {} 00:07:19.776 } 00:07:19.776 ] 00:07:19.776 21:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:19.777 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:20.034 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:20.034 "name": "Existed_Raid", 00:07:20.034 "uuid": "f15a78f4-4224-11ef-aa83-81fbc7dfef58", 00:07:20.034 "strip_size_kb": 64, 00:07:20.034 "state": "online", 00:07:20.034 "raid_level": "raid0", 00:07:20.034 "superblock": false, 00:07:20.034 "num_base_bdevs": 2, 00:07:20.034 "num_base_bdevs_discovered": 2, 00:07:20.034 "num_base_bdevs_operational": 2, 00:07:20.034 "base_bdevs_list": [ 00:07:20.034 { 00:07:20.034 "name": "BaseBdev1", 00:07:20.034 "uuid": "f018c41f-4224-11ef-aa83-81fbc7dfef58", 00:07:20.034 "is_configured": true, 00:07:20.034 "data_offset": 0, 00:07:20.034 "data_size": 65536 00:07:20.034 }, 00:07:20.034 { 
00:07:20.034 "name": "BaseBdev2", 00:07:20.034 "uuid": "f15a716d-4224-11ef-aa83-81fbc7dfef58", 00:07:20.034 "is_configured": true, 00:07:20.034 "data_offset": 0, 00:07:20.034 "data_size": 65536 00:07:20.034 } 00:07:20.034 ] 00:07:20.034 }' 00:07:20.034 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:20.034 21:06:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:20.292 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:20.292 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:20.292 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:20.292 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:20.292 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:20.292 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:20.292 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:20.292 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:20.550 [2024-07-14 21:06:31.934819] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:20.550 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:20.550 "name": "Existed_Raid", 00:07:20.550 "aliases": [ 00:07:20.550 "f15a78f4-4224-11ef-aa83-81fbc7dfef58" 00:07:20.550 ], 00:07:20.550 "product_name": "Raid Volume", 00:07:20.550 "block_size": 512, 00:07:20.550 "num_blocks": 131072, 00:07:20.550 "uuid": "f15a78f4-4224-11ef-aa83-81fbc7dfef58", 00:07:20.550 "assigned_rate_limits": { 00:07:20.550 "rw_ios_per_sec": 0, 00:07:20.550 "rw_mbytes_per_sec": 0, 00:07:20.550 "r_mbytes_per_sec": 0, 00:07:20.550 "w_mbytes_per_sec": 0 00:07:20.550 }, 00:07:20.550 "claimed": false, 00:07:20.550 "zoned": false, 00:07:20.550 "supported_io_types": { 00:07:20.550 "read": true, 00:07:20.550 "write": true, 00:07:20.550 "unmap": true, 00:07:20.550 "flush": true, 00:07:20.550 "reset": true, 00:07:20.550 "nvme_admin": false, 00:07:20.550 "nvme_io": false, 00:07:20.550 "nvme_io_md": false, 00:07:20.550 "write_zeroes": true, 00:07:20.550 "zcopy": false, 00:07:20.550 "get_zone_info": false, 00:07:20.550 "zone_management": false, 00:07:20.550 "zone_append": false, 00:07:20.550 "compare": false, 00:07:20.550 "compare_and_write": false, 00:07:20.550 "abort": false, 00:07:20.550 "seek_hole": false, 00:07:20.550 "seek_data": false, 00:07:20.550 "copy": false, 00:07:20.550 "nvme_iov_md": false 00:07:20.550 }, 00:07:20.551 "memory_domains": [ 00:07:20.551 { 00:07:20.551 "dma_device_id": "system", 00:07:20.551 "dma_device_type": 1 00:07:20.551 }, 00:07:20.551 { 00:07:20.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.551 "dma_device_type": 2 00:07:20.551 }, 00:07:20.551 { 00:07:20.551 "dma_device_id": "system", 00:07:20.551 "dma_device_type": 1 00:07:20.551 }, 00:07:20.551 { 00:07:20.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.551 "dma_device_type": 2 00:07:20.551 } 00:07:20.551 ], 00:07:20.551 "driver_specific": { 00:07:20.551 "raid": { 00:07:20.551 "uuid": "f15a78f4-4224-11ef-aa83-81fbc7dfef58", 00:07:20.551 "strip_size_kb": 64, 00:07:20.551 "state": 
"online", 00:07:20.551 "raid_level": "raid0", 00:07:20.551 "superblock": false, 00:07:20.551 "num_base_bdevs": 2, 00:07:20.551 "num_base_bdevs_discovered": 2, 00:07:20.551 "num_base_bdevs_operational": 2, 00:07:20.551 "base_bdevs_list": [ 00:07:20.551 { 00:07:20.551 "name": "BaseBdev1", 00:07:20.551 "uuid": "f018c41f-4224-11ef-aa83-81fbc7dfef58", 00:07:20.551 "is_configured": true, 00:07:20.551 "data_offset": 0, 00:07:20.551 "data_size": 65536 00:07:20.551 }, 00:07:20.551 { 00:07:20.551 "name": "BaseBdev2", 00:07:20.551 "uuid": "f15a716d-4224-11ef-aa83-81fbc7dfef58", 00:07:20.551 "is_configured": true, 00:07:20.551 "data_offset": 0, 00:07:20.551 "data_size": 65536 00:07:20.551 } 00:07:20.551 ] 00:07:20.551 } 00:07:20.551 } 00:07:20.551 }' 00:07:20.551 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:20.551 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:20.551 BaseBdev2' 00:07:20.551 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:20.551 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:20.551 21:06:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:20.809 "name": "BaseBdev1", 00:07:20.809 "aliases": [ 00:07:20.809 "f018c41f-4224-11ef-aa83-81fbc7dfef58" 00:07:20.809 ], 00:07:20.809 "product_name": "Malloc disk", 00:07:20.809 "block_size": 512, 00:07:20.809 "num_blocks": 65536, 00:07:20.809 "uuid": "f018c41f-4224-11ef-aa83-81fbc7dfef58", 00:07:20.809 "assigned_rate_limits": { 00:07:20.809 "rw_ios_per_sec": 0, 00:07:20.809 "rw_mbytes_per_sec": 0, 00:07:20.809 "r_mbytes_per_sec": 0, 00:07:20.809 "w_mbytes_per_sec": 0 00:07:20.809 }, 00:07:20.809 "claimed": true, 00:07:20.809 "claim_type": "exclusive_write", 00:07:20.809 "zoned": false, 00:07:20.809 "supported_io_types": { 00:07:20.809 "read": true, 00:07:20.809 "write": true, 00:07:20.809 "unmap": true, 00:07:20.809 "flush": true, 00:07:20.809 "reset": true, 00:07:20.809 "nvme_admin": false, 00:07:20.809 "nvme_io": false, 00:07:20.809 "nvme_io_md": false, 00:07:20.809 "write_zeroes": true, 00:07:20.809 "zcopy": true, 00:07:20.809 "get_zone_info": false, 00:07:20.809 "zone_management": false, 00:07:20.809 "zone_append": false, 00:07:20.809 "compare": false, 00:07:20.809 "compare_and_write": false, 00:07:20.809 "abort": true, 00:07:20.809 "seek_hole": false, 00:07:20.809 "seek_data": false, 00:07:20.809 "copy": true, 00:07:20.809 "nvme_iov_md": false 00:07:20.809 }, 00:07:20.809 "memory_domains": [ 00:07:20.809 { 00:07:20.809 "dma_device_id": "system", 00:07:20.809 "dma_device_type": 1 00:07:20.809 }, 00:07:20.809 { 00:07:20.809 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:20.809 "dma_device_type": 2 00:07:20.809 } 00:07:20.809 ], 00:07:20.809 "driver_specific": {} 00:07:20.809 }' 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:20.809 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:21.068 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:21.068 "name": "BaseBdev2", 00:07:21.068 "aliases": [ 00:07:21.068 "f15a716d-4224-11ef-aa83-81fbc7dfef58" 00:07:21.068 ], 00:07:21.068 "product_name": "Malloc disk", 00:07:21.068 "block_size": 512, 00:07:21.068 "num_blocks": 65536, 00:07:21.068 "uuid": "f15a716d-4224-11ef-aa83-81fbc7dfef58", 00:07:21.068 "assigned_rate_limits": { 00:07:21.068 "rw_ios_per_sec": 0, 00:07:21.068 "rw_mbytes_per_sec": 0, 00:07:21.068 "r_mbytes_per_sec": 0, 00:07:21.068 "w_mbytes_per_sec": 0 00:07:21.068 }, 00:07:21.068 "claimed": true, 00:07:21.068 "claim_type": "exclusive_write", 00:07:21.068 "zoned": false, 00:07:21.068 "supported_io_types": { 00:07:21.068 "read": true, 00:07:21.068 "write": true, 00:07:21.068 "unmap": true, 00:07:21.068 "flush": true, 00:07:21.068 "reset": true, 00:07:21.068 "nvme_admin": false, 00:07:21.068 "nvme_io": false, 00:07:21.068 "nvme_io_md": false, 00:07:21.068 "write_zeroes": true, 00:07:21.068 "zcopy": true, 00:07:21.068 "get_zone_info": false, 00:07:21.068 "zone_management": false, 00:07:21.068 "zone_append": false, 00:07:21.068 "compare": false, 00:07:21.068 "compare_and_write": false, 00:07:21.068 "abort": true, 00:07:21.068 "seek_hole": false, 00:07:21.068 "seek_data": false, 00:07:21.068 "copy": true, 00:07:21.068 "nvme_iov_md": false 00:07:21.068 }, 00:07:21.068 "memory_domains": [ 00:07:21.068 { 00:07:21.068 "dma_device_id": "system", 00:07:21.068 "dma_device_type": 1 00:07:21.068 }, 00:07:21.068 { 00:07:21.068 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:21.068 "dma_device_type": 2 00:07:21.068 } 00:07:21.068 ], 00:07:21.068 "driver_specific": {} 00:07:21.068 }' 00:07:21.068 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:21.068 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:21.068 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:21.068 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:21.069 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:21.069 21:06:32 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:21.069 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:21.069 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:21.069 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:21.069 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:21.069 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:21.069 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:21.069 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:21.326 [2024-07-14 21:06:32.674809] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:21.326 [2024-07-14 21:06:32.674844] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:21.326 [2024-07-14 21:06:32.674861] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:21.326 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:21.326 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:07:21.326 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:21.326 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:21.326 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:21.326 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:21.326 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:21.327 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:21.327 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:21.327 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:21.327 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:21.327 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:21.327 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:21.327 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:21.327 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:21.327 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:21.327 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:21.585 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:21.585 "name": "Existed_Raid", 00:07:21.585 "uuid": "f15a78f4-4224-11ef-aa83-81fbc7dfef58", 00:07:21.585 "strip_size_kb": 64, 00:07:21.585 "state": "offline", 00:07:21.585 "raid_level": "raid0", 00:07:21.585 "superblock": false, 00:07:21.585 
"num_base_bdevs": 2, 00:07:21.585 "num_base_bdevs_discovered": 1, 00:07:21.585 "num_base_bdevs_operational": 1, 00:07:21.585 "base_bdevs_list": [ 00:07:21.585 { 00:07:21.585 "name": null, 00:07:21.585 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:21.585 "is_configured": false, 00:07:21.585 "data_offset": 0, 00:07:21.586 "data_size": 65536 00:07:21.586 }, 00:07:21.586 { 00:07:21.586 "name": "BaseBdev2", 00:07:21.586 "uuid": "f15a716d-4224-11ef-aa83-81fbc7dfef58", 00:07:21.586 "is_configured": true, 00:07:21.586 "data_offset": 0, 00:07:21.586 "data_size": 65536 00:07:21.586 } 00:07:21.586 ] 00:07:21.586 }' 00:07:21.586 21:06:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:21.586 21:06:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:21.844 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:21.844 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:21.844 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:21.844 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:22.103 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:22.103 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:22.103 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:22.362 [2024-07-14 21:06:33.659091] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:22.362 [2024-07-14 21:06:33.659128] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2263c2834a00 name Existed_Raid, state offline 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 48623 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 48623 ']' 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 48623 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 48623 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48623' 00:07:22.362 killing process with pid 48623 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 48623 00:07:22.362 [2024-07-14 21:06:33.879260] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:22.362 [2024-07-14 21:06:33.879304] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:22.362 21:06:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 48623 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:22.621 00:07:22.621 real 0m7.934s 00:07:22.621 user 0m13.334s 00:07:22.621 sys 0m1.723s 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:22.621 ************************************ 00:07:22.621 END TEST raid_state_function_test 00:07:22.621 ************************************ 00:07:22.621 21:06:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:22.621 21:06:34 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 2 true 00:07:22.621 21:06:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:22.621 21:06:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:22.621 21:06:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:22.621 ************************************ 00:07:22.621 START TEST raid_state_function_test_sb 00:07:22.621 ************************************ 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 2 true 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:22.621 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=48894 00:07:22.622 Process raid pid: 48894 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 48894' 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 48894 /var/tmp/spdk-raid.sock 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 48894 ']' 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:22.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:22.622 21:06:34 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:22.880 [2024-07-14 21:06:34.174593] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:22.880 [2024-07-14 21:06:34.174747] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:23.139 EAL: TSC is not safe to use in SMP mode 00:07:23.139 EAL: TSC is not invariant 00:07:23.398 [2024-07-14 21:06:34.696163] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:23.398 [2024-07-14 21:06:34.790432] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:07:23.398 [2024-07-14 21:06:34.793164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:23.398 [2024-07-14 21:06:34.794227] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.398 [2024-07-14 21:06:34.794243] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:23.656 21:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:23.656 21:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:07:23.656 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:23.915 [2024-07-14 21:06:35.380247] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:23.915 [2024-07-14 21:06:35.380313] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:23.915 [2024-07-14 21:06:35.380333] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:23.915 [2024-07-14 21:06:35.380341] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:23.915 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:23.915 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:23.915 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:23.915 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:23.915 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:23.915 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:23.915 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:23.915 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:23.915 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:23.915 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:23.915 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:23.915 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:24.177 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:24.177 "name": "Existed_Raid", 00:07:24.177 "uuid": "f42084b3-4224-11ef-aa83-81fbc7dfef58", 00:07:24.177 "strip_size_kb": 64, 00:07:24.177 "state": "configuring", 00:07:24.177 "raid_level": "raid0", 00:07:24.177 "superblock": true, 00:07:24.177 "num_base_bdevs": 2, 00:07:24.177 "num_base_bdevs_discovered": 0, 00:07:24.177 "num_base_bdevs_operational": 2, 00:07:24.177 "base_bdevs_list": [ 00:07:24.177 { 00:07:24.177 "name": "BaseBdev1", 00:07:24.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.177 "is_configured": false, 00:07:24.177 "data_offset": 0, 00:07:24.177 "data_size": 0 00:07:24.177 }, 
00:07:24.177 { 00:07:24.178 "name": "BaseBdev2", 00:07:24.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:24.178 "is_configured": false, 00:07:24.178 "data_offset": 0, 00:07:24.178 "data_size": 0 00:07:24.178 } 00:07:24.178 ] 00:07:24.178 }' 00:07:24.178 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:24.178 21:06:35 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:24.439 21:06:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:24.697 [2024-07-14 21:06:36.108264] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:24.697 [2024-07-14 21:06:36.108282] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2d68abe34500 name Existed_Raid, state configuring 00:07:24.697 21:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:24.954 [2024-07-14 21:06:36.300270] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:24.954 [2024-07-14 21:06:36.300317] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:24.954 [2024-07-14 21:06:36.300322] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:24.954 [2024-07-14 21:06:36.300345] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:24.954 21:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:25.212 [2024-07-14 21:06:36.577165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:25.212 BaseBdev1 00:07:25.212 21:06:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:25.212 21:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:25.212 21:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:25.212 21:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:25.212 21:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:25.212 21:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:25.212 21:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:25.470 21:06:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:25.776 [ 00:07:25.776 { 00:07:25.776 "name": "BaseBdev1", 00:07:25.776 "aliases": [ 00:07:25.776 "f4d70525-4224-11ef-aa83-81fbc7dfef58" 00:07:25.776 ], 00:07:25.776 "product_name": "Malloc disk", 00:07:25.776 "block_size": 512, 00:07:25.776 "num_blocks": 65536, 00:07:25.776 "uuid": "f4d70525-4224-11ef-aa83-81fbc7dfef58", 00:07:25.776 "assigned_rate_limits": { 00:07:25.776 "rw_ios_per_sec": 0, 00:07:25.776 "rw_mbytes_per_sec": 
0, 00:07:25.776 "r_mbytes_per_sec": 0, 00:07:25.776 "w_mbytes_per_sec": 0 00:07:25.776 }, 00:07:25.776 "claimed": true, 00:07:25.776 "claim_type": "exclusive_write", 00:07:25.776 "zoned": false, 00:07:25.776 "supported_io_types": { 00:07:25.776 "read": true, 00:07:25.776 "write": true, 00:07:25.776 "unmap": true, 00:07:25.776 "flush": true, 00:07:25.776 "reset": true, 00:07:25.776 "nvme_admin": false, 00:07:25.776 "nvme_io": false, 00:07:25.776 "nvme_io_md": false, 00:07:25.776 "write_zeroes": true, 00:07:25.776 "zcopy": true, 00:07:25.776 "get_zone_info": false, 00:07:25.776 "zone_management": false, 00:07:25.776 "zone_append": false, 00:07:25.776 "compare": false, 00:07:25.776 "compare_and_write": false, 00:07:25.776 "abort": true, 00:07:25.776 "seek_hole": false, 00:07:25.776 "seek_data": false, 00:07:25.776 "copy": true, 00:07:25.776 "nvme_iov_md": false 00:07:25.776 }, 00:07:25.776 "memory_domains": [ 00:07:25.776 { 00:07:25.776 "dma_device_id": "system", 00:07:25.776 "dma_device_type": 1 00:07:25.776 }, 00:07:25.776 { 00:07:25.776 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:25.776 "dma_device_type": 2 00:07:25.776 } 00:07:25.776 ], 00:07:25.776 "driver_specific": {} 00:07:25.776 } 00:07:25.776 ] 00:07:25.776 21:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:25.776 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:25.776 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:25.776 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:25.776 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:25.777 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:25.777 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:25.777 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:25.777 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:25.777 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:25.777 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:25.777 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:25.777 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:25.777 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:25.777 "name": "Existed_Raid", 00:07:25.777 "uuid": "f4ace72a-4224-11ef-aa83-81fbc7dfef58", 00:07:25.777 "strip_size_kb": 64, 00:07:25.777 "state": "configuring", 00:07:25.777 "raid_level": "raid0", 00:07:25.777 "superblock": true, 00:07:25.777 "num_base_bdevs": 2, 00:07:25.777 "num_base_bdevs_discovered": 1, 00:07:25.777 "num_base_bdevs_operational": 2, 00:07:25.777 "base_bdevs_list": [ 00:07:25.777 { 00:07:25.777 "name": "BaseBdev1", 00:07:25.777 "uuid": "f4d70525-4224-11ef-aa83-81fbc7dfef58", 00:07:25.777 "is_configured": true, 00:07:25.777 "data_offset": 2048, 00:07:25.777 "data_size": 
63488 00:07:25.777 }, 00:07:25.777 { 00:07:25.777 "name": "BaseBdev2", 00:07:25.777 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:25.777 "is_configured": false, 00:07:25.777 "data_offset": 0, 00:07:25.777 "data_size": 0 00:07:25.777 } 00:07:25.777 ] 00:07:25.777 }' 00:07:25.777 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:25.777 21:06:37 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:26.343 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:26.343 [2024-07-14 21:06:37.852455] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:26.343 [2024-07-14 21:06:37.852501] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2d68abe34500 name Existed_Raid, state configuring 00:07:26.343 21:06:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:26.602 [2024-07-14 21:06:38.108496] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:26.602 [2024-07-14 21:06:38.109350] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:26.602 [2024-07-14 21:06:38.109401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 2 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:26.602 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:26.860 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:26.860 "name": "Existed_Raid", 00:07:26.860 "uuid": "f5c0d0bc-4224-11ef-aa83-81fbc7dfef58", 00:07:26.860 "strip_size_kb": 64, 00:07:26.860 
"state": "configuring", 00:07:26.860 "raid_level": "raid0", 00:07:26.860 "superblock": true, 00:07:26.860 "num_base_bdevs": 2, 00:07:26.860 "num_base_bdevs_discovered": 1, 00:07:26.860 "num_base_bdevs_operational": 2, 00:07:26.860 "base_bdevs_list": [ 00:07:26.860 { 00:07:26.860 "name": "BaseBdev1", 00:07:26.860 "uuid": "f4d70525-4224-11ef-aa83-81fbc7dfef58", 00:07:26.860 "is_configured": true, 00:07:26.860 "data_offset": 2048, 00:07:26.860 "data_size": 63488 00:07:26.860 }, 00:07:26.860 { 00:07:26.860 "name": "BaseBdev2", 00:07:26.860 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:26.860 "is_configured": false, 00:07:26.860 "data_offset": 0, 00:07:26.860 "data_size": 0 00:07:26.860 } 00:07:26.860 ] 00:07:26.860 }' 00:07:26.860 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:26.860 21:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:27.118 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:27.377 [2024-07-14 21:06:38.864657] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:27.377 [2024-07-14 21:06:38.864745] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2d68abe34a00 00:07:27.377 [2024-07-14 21:06:38.864751] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:27.377 [2024-07-14 21:06:38.864770] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2d68abe97e20 00:07:27.377 [2024-07-14 21:06:38.864812] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2d68abe34a00 00:07:27.377 [2024-07-14 21:06:38.864815] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2d68abe34a00 00:07:27.377 [2024-07-14 21:06:38.864834] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:27.377 BaseBdev2 00:07:27.377 21:06:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:27.377 21:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:27.377 21:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:27.377 21:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:07:27.377 21:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:27.377 21:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:27.377 21:06:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:27.635 21:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:27.893 [ 00:07:27.893 { 00:07:27.893 "name": "BaseBdev2", 00:07:27.893 "aliases": [ 00:07:27.893 "f6342dee-4224-11ef-aa83-81fbc7dfef58" 00:07:27.893 ], 00:07:27.893 "product_name": "Malloc disk", 00:07:27.893 "block_size": 512, 00:07:27.893 "num_blocks": 65536, 00:07:27.893 "uuid": "f6342dee-4224-11ef-aa83-81fbc7dfef58", 00:07:27.893 "assigned_rate_limits": { 00:07:27.893 "rw_ios_per_sec": 0, 
00:07:27.893 "rw_mbytes_per_sec": 0, 00:07:27.893 "r_mbytes_per_sec": 0, 00:07:27.893 "w_mbytes_per_sec": 0 00:07:27.893 }, 00:07:27.893 "claimed": true, 00:07:27.893 "claim_type": "exclusive_write", 00:07:27.893 "zoned": false, 00:07:27.893 "supported_io_types": { 00:07:27.893 "read": true, 00:07:27.893 "write": true, 00:07:27.893 "unmap": true, 00:07:27.893 "flush": true, 00:07:27.893 "reset": true, 00:07:27.893 "nvme_admin": false, 00:07:27.893 "nvme_io": false, 00:07:27.893 "nvme_io_md": false, 00:07:27.893 "write_zeroes": true, 00:07:27.893 "zcopy": true, 00:07:27.893 "get_zone_info": false, 00:07:27.893 "zone_management": false, 00:07:27.893 "zone_append": false, 00:07:27.893 "compare": false, 00:07:27.893 "compare_and_write": false, 00:07:27.893 "abort": true, 00:07:27.893 "seek_hole": false, 00:07:27.893 "seek_data": false, 00:07:27.893 "copy": true, 00:07:27.893 "nvme_iov_md": false 00:07:27.893 }, 00:07:27.893 "memory_domains": [ 00:07:27.893 { 00:07:27.893 "dma_device_id": "system", 00:07:27.893 "dma_device_type": 1 00:07:27.893 }, 00:07:27.893 { 00:07:27.893 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:27.893 "dma_device_type": 2 00:07:27.893 } 00:07:27.893 ], 00:07:27.893 "driver_specific": {} 00:07:27.893 } 00:07:27.893 ] 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 2 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:27.893 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:28.151 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:28.151 "name": "Existed_Raid", 00:07:28.151 "uuid": "f5c0d0bc-4224-11ef-aa83-81fbc7dfef58", 00:07:28.151 "strip_size_kb": 64, 00:07:28.151 "state": "online", 00:07:28.151 "raid_level": "raid0", 00:07:28.151 "superblock": true, 00:07:28.151 "num_base_bdevs": 2, 00:07:28.151 "num_base_bdevs_discovered": 2, 00:07:28.151 "num_base_bdevs_operational": 2, 
00:07:28.151 "base_bdevs_list": [ 00:07:28.151 { 00:07:28.151 "name": "BaseBdev1", 00:07:28.151 "uuid": "f4d70525-4224-11ef-aa83-81fbc7dfef58", 00:07:28.151 "is_configured": true, 00:07:28.151 "data_offset": 2048, 00:07:28.151 "data_size": 63488 00:07:28.151 }, 00:07:28.151 { 00:07:28.151 "name": "BaseBdev2", 00:07:28.151 "uuid": "f6342dee-4224-11ef-aa83-81fbc7dfef58", 00:07:28.151 "is_configured": true, 00:07:28.151 "data_offset": 2048, 00:07:28.151 "data_size": 63488 00:07:28.151 } 00:07:28.151 ] 00:07:28.151 }' 00:07:28.151 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:28.151 21:06:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:28.409 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:28.410 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:28.410 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:28.410 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:28.410 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:28.410 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:07:28.410 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:28.410 21:06:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:28.668 [2024-07-14 21:06:40.068664] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:28.668 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:28.668 "name": "Existed_Raid", 00:07:28.668 "aliases": [ 00:07:28.668 "f5c0d0bc-4224-11ef-aa83-81fbc7dfef58" 00:07:28.668 ], 00:07:28.668 "product_name": "Raid Volume", 00:07:28.668 "block_size": 512, 00:07:28.668 "num_blocks": 126976, 00:07:28.668 "uuid": "f5c0d0bc-4224-11ef-aa83-81fbc7dfef58", 00:07:28.668 "assigned_rate_limits": { 00:07:28.668 "rw_ios_per_sec": 0, 00:07:28.668 "rw_mbytes_per_sec": 0, 00:07:28.668 "r_mbytes_per_sec": 0, 00:07:28.668 "w_mbytes_per_sec": 0 00:07:28.668 }, 00:07:28.668 "claimed": false, 00:07:28.668 "zoned": false, 00:07:28.668 "supported_io_types": { 00:07:28.668 "read": true, 00:07:28.668 "write": true, 00:07:28.668 "unmap": true, 00:07:28.668 "flush": true, 00:07:28.668 "reset": true, 00:07:28.668 "nvme_admin": false, 00:07:28.668 "nvme_io": false, 00:07:28.668 "nvme_io_md": false, 00:07:28.668 "write_zeroes": true, 00:07:28.668 "zcopy": false, 00:07:28.668 "get_zone_info": false, 00:07:28.668 "zone_management": false, 00:07:28.668 "zone_append": false, 00:07:28.668 "compare": false, 00:07:28.668 "compare_and_write": false, 00:07:28.668 "abort": false, 00:07:28.668 "seek_hole": false, 00:07:28.668 "seek_data": false, 00:07:28.668 "copy": false, 00:07:28.668 "nvme_iov_md": false 00:07:28.668 }, 00:07:28.668 "memory_domains": [ 00:07:28.668 { 00:07:28.668 "dma_device_id": "system", 00:07:28.668 "dma_device_type": 1 00:07:28.668 }, 00:07:28.668 { 00:07:28.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.668 "dma_device_type": 2 00:07:28.668 }, 00:07:28.668 { 00:07:28.668 "dma_device_id": "system", 00:07:28.668 "dma_device_type": 1 00:07:28.668 
}, 00:07:28.668 { 00:07:28.668 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.668 "dma_device_type": 2 00:07:28.668 } 00:07:28.668 ], 00:07:28.668 "driver_specific": { 00:07:28.668 "raid": { 00:07:28.668 "uuid": "f5c0d0bc-4224-11ef-aa83-81fbc7dfef58", 00:07:28.668 "strip_size_kb": 64, 00:07:28.668 "state": "online", 00:07:28.668 "raid_level": "raid0", 00:07:28.668 "superblock": true, 00:07:28.668 "num_base_bdevs": 2, 00:07:28.668 "num_base_bdevs_discovered": 2, 00:07:28.668 "num_base_bdevs_operational": 2, 00:07:28.668 "base_bdevs_list": [ 00:07:28.668 { 00:07:28.668 "name": "BaseBdev1", 00:07:28.668 "uuid": "f4d70525-4224-11ef-aa83-81fbc7dfef58", 00:07:28.668 "is_configured": true, 00:07:28.668 "data_offset": 2048, 00:07:28.668 "data_size": 63488 00:07:28.668 }, 00:07:28.668 { 00:07:28.668 "name": "BaseBdev2", 00:07:28.668 "uuid": "f6342dee-4224-11ef-aa83-81fbc7dfef58", 00:07:28.668 "is_configured": true, 00:07:28.668 "data_offset": 2048, 00:07:28.668 "data_size": 63488 00:07:28.668 } 00:07:28.668 ] 00:07:28.668 } 00:07:28.668 } 00:07:28.668 }' 00:07:28.668 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:28.668 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:28.668 BaseBdev2' 00:07:28.668 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:28.668 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:28.668 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:28.927 "name": "BaseBdev1", 00:07:28.927 "aliases": [ 00:07:28.927 "f4d70525-4224-11ef-aa83-81fbc7dfef58" 00:07:28.927 ], 00:07:28.927 "product_name": "Malloc disk", 00:07:28.927 "block_size": 512, 00:07:28.927 "num_blocks": 65536, 00:07:28.927 "uuid": "f4d70525-4224-11ef-aa83-81fbc7dfef58", 00:07:28.927 "assigned_rate_limits": { 00:07:28.927 "rw_ios_per_sec": 0, 00:07:28.927 "rw_mbytes_per_sec": 0, 00:07:28.927 "r_mbytes_per_sec": 0, 00:07:28.927 "w_mbytes_per_sec": 0 00:07:28.927 }, 00:07:28.927 "claimed": true, 00:07:28.927 "claim_type": "exclusive_write", 00:07:28.927 "zoned": false, 00:07:28.927 "supported_io_types": { 00:07:28.927 "read": true, 00:07:28.927 "write": true, 00:07:28.927 "unmap": true, 00:07:28.927 "flush": true, 00:07:28.927 "reset": true, 00:07:28.927 "nvme_admin": false, 00:07:28.927 "nvme_io": false, 00:07:28.927 "nvme_io_md": false, 00:07:28.927 "write_zeroes": true, 00:07:28.927 "zcopy": true, 00:07:28.927 "get_zone_info": false, 00:07:28.927 "zone_management": false, 00:07:28.927 "zone_append": false, 00:07:28.927 "compare": false, 00:07:28.927 "compare_and_write": false, 00:07:28.927 "abort": true, 00:07:28.927 "seek_hole": false, 00:07:28.927 "seek_data": false, 00:07:28.927 "copy": true, 00:07:28.927 "nvme_iov_md": false 00:07:28.927 }, 00:07:28.927 "memory_domains": [ 00:07:28.927 { 00:07:28.927 "dma_device_id": "system", 00:07:28.927 "dma_device_type": 1 00:07:28.927 }, 00:07:28.927 { 00:07:28.927 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:28.927 "dma_device_type": 2 00:07:28.927 } 00:07:28.927 ], 00:07:28.927 "driver_specific": {} 00:07:28.927 }' 00:07:28.927 21:06:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:28.927 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:29.186 "name": "BaseBdev2", 00:07:29.186 "aliases": [ 00:07:29.186 "f6342dee-4224-11ef-aa83-81fbc7dfef58" 00:07:29.186 ], 00:07:29.186 "product_name": "Malloc disk", 00:07:29.186 "block_size": 512, 00:07:29.186 "num_blocks": 65536, 00:07:29.186 "uuid": "f6342dee-4224-11ef-aa83-81fbc7dfef58", 00:07:29.186 "assigned_rate_limits": { 00:07:29.186 "rw_ios_per_sec": 0, 00:07:29.186 "rw_mbytes_per_sec": 0, 00:07:29.186 "r_mbytes_per_sec": 0, 00:07:29.186 "w_mbytes_per_sec": 0 00:07:29.186 }, 00:07:29.186 "claimed": true, 00:07:29.186 "claim_type": "exclusive_write", 00:07:29.186 "zoned": false, 00:07:29.186 "supported_io_types": { 00:07:29.186 "read": true, 00:07:29.186 "write": true, 00:07:29.186 "unmap": true, 00:07:29.186 "flush": true, 00:07:29.186 "reset": true, 00:07:29.186 "nvme_admin": false, 00:07:29.186 "nvme_io": false, 00:07:29.186 "nvme_io_md": false, 00:07:29.186 "write_zeroes": true, 00:07:29.186 "zcopy": true, 00:07:29.186 "get_zone_info": false, 00:07:29.186 "zone_management": false, 00:07:29.186 "zone_append": false, 00:07:29.186 "compare": false, 00:07:29.186 "compare_and_write": false, 00:07:29.186 "abort": true, 00:07:29.186 "seek_hole": false, 00:07:29.186 "seek_data": false, 00:07:29.186 "copy": true, 00:07:29.186 "nvme_iov_md": false 00:07:29.186 }, 00:07:29.186 "memory_domains": [ 00:07:29.186 { 00:07:29.186 "dma_device_id": "system", 00:07:29.186 "dma_device_type": 1 00:07:29.186 }, 00:07:29.186 { 00:07:29.186 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:29.186 "dma_device_type": 2 00:07:29.186 } 00:07:29.186 ], 00:07:29.186 "driver_specific": {} 00:07:29.186 }' 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:29.186 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:29.444 [2024-07-14 21:06:40.896593] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:29.444 [2024-07-14 21:06:40.896608] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:29.444 [2024-07-14 21:06:40.896622] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 1 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:29.444 21:06:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:29.444 21:06:40 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:29.703 21:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:29.703 "name": "Existed_Raid", 00:07:29.703 "uuid": "f5c0d0bc-4224-11ef-aa83-81fbc7dfef58", 00:07:29.703 "strip_size_kb": 64, 00:07:29.703 "state": "offline", 00:07:29.703 "raid_level": "raid0", 00:07:29.703 "superblock": true, 00:07:29.703 "num_base_bdevs": 2, 00:07:29.703 "num_base_bdevs_discovered": 1, 00:07:29.703 "num_base_bdevs_operational": 1, 00:07:29.703 "base_bdevs_list": [ 00:07:29.703 { 00:07:29.703 "name": null, 00:07:29.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:29.703 "is_configured": false, 00:07:29.703 "data_offset": 2048, 00:07:29.703 "data_size": 63488 00:07:29.703 }, 00:07:29.703 { 00:07:29.703 "name": "BaseBdev2", 00:07:29.703 "uuid": "f6342dee-4224-11ef-aa83-81fbc7dfef58", 00:07:29.703 "is_configured": true, 00:07:29.703 "data_offset": 2048, 00:07:29.703 "data_size": 63488 00:07:29.703 } 00:07:29.703 ] 00:07:29.703 }' 00:07:29.703 21:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:29.703 21:06:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:29.961 21:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:29.961 21:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:29.961 21:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:29.961 21:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:30.219 21:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:30.219 21:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:30.219 21:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:30.478 [2024-07-14 21:06:41.912487] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:30.478 [2024-07-14 21:06:41.912549] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2d68abe34a00 name Existed_Raid, state offline 00:07:30.478 21:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:30.478 21:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:30.478 21:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:30.478 21:06:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 48894 00:07:30.736 21:06:42 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 48894 ']' 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 48894 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 48894 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 48894' 00:07:30.736 killing process with pid 48894 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 48894 00:07:30.736 [2024-07-14 21:06:42.200279] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:30.736 21:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 48894 00:07:30.736 [2024-07-14 21:06:42.200331] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:30.995 21:06:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:07:30.995 00:07:30.995 real 0m8.267s 00:07:30.995 user 0m14.235s 00:07:30.995 sys 0m1.475s 00:07:30.995 21:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:30.995 21:06:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:30.995 ************************************ 00:07:30.995 END TEST raid_state_function_test_sb 00:07:30.995 ************************************ 00:07:30.995 21:06:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:30.995 21:06:42 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 2 00:07:30.995 21:06:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:07:30.995 21:06:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:30.995 21:06:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:30.995 ************************************ 00:07:30.995 START TEST raid_superblock_test 00:07:30.995 ************************************ 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 2 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:07:30.995 21:06:42 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=49164 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 49164 /var/tmp/spdk-raid.sock 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 49164 ']' 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:30.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:30.995 21:06:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:30.995 [2024-07-14 21:06:42.487379] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:30.995 [2024-07-14 21:06:42.487652] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:31.562 EAL: TSC is not safe to use in SMP mode 00:07:31.562 EAL: TSC is not invariant 00:07:31.562 [2024-07-14 21:06:42.989125] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:31.562 [2024-07-14 21:06:43.090811] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
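[annotation, not captured output] raid_superblock_test builds the same raid0 on passthru bdevs (pt1/pt2 with fixed UUIDs) layered over mallocs, then deletes the array. A sketch of the setup sequence, again reconstructed from the rpc.py calls recorded in the transcript below rather than anything outside it:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_malloc_create 32 512 -b malloc1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    $RPC bdev_malloc_create 32 512 -b malloc2
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # assemble raid_bdev1 with a superblock, then delete it; the log below
    # shows the array transitioning from online to offline on delete
    $RPC bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s
    $RPC bdev_raid_delete raid_bdev1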
00:07:31.562 [2024-07-14 21:06:43.093598] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:31.562 [2024-07-14 21:06:43.094619] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:31.562 [2024-07-14 21:06:43.094638] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:32.129 21:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:32.129 21:06:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:07:32.129 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:07:32.129 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:32.129 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:07:32.129 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:07:32.129 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:07:32.129 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:32.129 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:32.129 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:32.129 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:07:32.386 malloc1 00:07:32.386 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:32.645 [2024-07-14 21:06:43.953332] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:32.645 [2024-07-14 21:06:43.953387] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.645 [2024-07-14 21:06:43.953414] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19aace034780 00:07:32.645 [2024-07-14 21:06:43.953421] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.645 [2024-07-14 21:06:43.954214] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.645 [2024-07-14 21:06:43.954275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:32.645 pt1 00:07:32.645 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:32.645 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:32.645 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:07:32.645 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:07:32.645 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:07:32.645 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:07:32.645 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:07:32.645 21:06:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:07:32.645 21:06:43 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:07:32.904 malloc2 00:07:32.904 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:32.904 [2024-07-14 21:06:44.445347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:32.904 [2024-07-14 21:06:44.445448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:32.904 [2024-07-14 21:06:44.445459] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19aace034c80 00:07:32.904 [2024-07-14 21:06:44.445466] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:32.904 [2024-07-14 21:06:44.446201] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:32.904 [2024-07-14 21:06:44.446231] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:32.904 pt2 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2' -n raid_bdev1 -s 00:07:33.163 [2024-07-14 21:06:44.649371] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:33.163 [2024-07-14 21:06:44.650043] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:33.163 [2024-07-14 21:06:44.650103] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x19aace034f00 00:07:33.163 [2024-07-14 21:06:44.650109] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:33.163 [2024-07-14 21:06:44.650141] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19aace097e20 00:07:33.163 [2024-07-14 21:06:44.650234] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19aace034f00 00:07:33.163 [2024-07-14 21:06:44.650243] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x19aace034f00 00:07:33.163 [2024-07-14 21:06:44.650273] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:33.163 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:33.421 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:33.421 "name": "raid_bdev1", 00:07:33.421 "uuid": "f9a6dfac-4224-11ef-aa83-81fbc7dfef58", 00:07:33.421 "strip_size_kb": 64, 00:07:33.421 "state": "online", 00:07:33.421 "raid_level": "raid0", 00:07:33.421 "superblock": true, 00:07:33.421 "num_base_bdevs": 2, 00:07:33.421 "num_base_bdevs_discovered": 2, 00:07:33.421 "num_base_bdevs_operational": 2, 00:07:33.421 "base_bdevs_list": [ 00:07:33.421 { 00:07:33.421 "name": "pt1", 00:07:33.421 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.421 "is_configured": true, 00:07:33.421 "data_offset": 2048, 00:07:33.421 "data_size": 63488 00:07:33.421 }, 00:07:33.421 { 00:07:33.421 "name": "pt2", 00:07:33.421 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.421 "is_configured": true, 00:07:33.421 "data_offset": 2048, 00:07:33.421 "data_size": 63488 00:07:33.421 } 00:07:33.421 ] 00:07:33.421 }' 00:07:33.421 21:06:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:33.421 21:06:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:33.680 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:07:33.680 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:33.680 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:33.680 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:33.680 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:33.680 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:33.680 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:33.680 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:33.952 [2024-07-14 21:06:45.465424] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:33.952 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:33.952 "name": "raid_bdev1", 00:07:33.952 "aliases": [ 00:07:33.952 "f9a6dfac-4224-11ef-aa83-81fbc7dfef58" 00:07:33.952 ], 00:07:33.952 "product_name": "Raid Volume", 00:07:33.952 "block_size": 512, 00:07:33.952 "num_blocks": 126976, 00:07:33.952 "uuid": "f9a6dfac-4224-11ef-aa83-81fbc7dfef58", 00:07:33.952 "assigned_rate_limits": { 00:07:33.952 "rw_ios_per_sec": 0, 00:07:33.952 "rw_mbytes_per_sec": 0, 00:07:33.952 "r_mbytes_per_sec": 0, 00:07:33.952 "w_mbytes_per_sec": 0 00:07:33.952 }, 00:07:33.952 "claimed": false, 00:07:33.952 "zoned": false, 00:07:33.952 "supported_io_types": { 00:07:33.952 "read": true, 00:07:33.952 "write": true, 00:07:33.952 "unmap": true, 00:07:33.952 "flush": true, 00:07:33.952 "reset": true, 00:07:33.952 "nvme_admin": false, 00:07:33.952 "nvme_io": 
false, 00:07:33.952 "nvme_io_md": false, 00:07:33.952 "write_zeroes": true, 00:07:33.952 "zcopy": false, 00:07:33.952 "get_zone_info": false, 00:07:33.952 "zone_management": false, 00:07:33.952 "zone_append": false, 00:07:33.952 "compare": false, 00:07:33.952 "compare_and_write": false, 00:07:33.952 "abort": false, 00:07:33.952 "seek_hole": false, 00:07:33.952 "seek_data": false, 00:07:33.952 "copy": false, 00:07:33.952 "nvme_iov_md": false 00:07:33.952 }, 00:07:33.952 "memory_domains": [ 00:07:33.952 { 00:07:33.952 "dma_device_id": "system", 00:07:33.952 "dma_device_type": 1 00:07:33.952 }, 00:07:33.952 { 00:07:33.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.952 "dma_device_type": 2 00:07:33.952 }, 00:07:33.952 { 00:07:33.952 "dma_device_id": "system", 00:07:33.952 "dma_device_type": 1 00:07:33.952 }, 00:07:33.952 { 00:07:33.952 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:33.952 "dma_device_type": 2 00:07:33.952 } 00:07:33.952 ], 00:07:33.952 "driver_specific": { 00:07:33.952 "raid": { 00:07:33.952 "uuid": "f9a6dfac-4224-11ef-aa83-81fbc7dfef58", 00:07:33.952 "strip_size_kb": 64, 00:07:33.952 "state": "online", 00:07:33.952 "raid_level": "raid0", 00:07:33.952 "superblock": true, 00:07:33.952 "num_base_bdevs": 2, 00:07:33.952 "num_base_bdevs_discovered": 2, 00:07:33.952 "num_base_bdevs_operational": 2, 00:07:33.952 "base_bdevs_list": [ 00:07:33.952 { 00:07:33.952 "name": "pt1", 00:07:33.952 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:33.952 "is_configured": true, 00:07:33.952 "data_offset": 2048, 00:07:33.952 "data_size": 63488 00:07:33.952 }, 00:07:33.952 { 00:07:33.952 "name": "pt2", 00:07:33.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:33.952 "is_configured": true, 00:07:33.952 "data_offset": 2048, 00:07:33.952 "data_size": 63488 00:07:33.952 } 00:07:33.952 ] 00:07:33.952 } 00:07:33.952 } 00:07:33.952 }' 00:07:33.952 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:33.952 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:33.952 pt2' 00:07:33.952 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:34.223 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:34.223 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:34.223 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:34.223 "name": "pt1", 00:07:34.223 "aliases": [ 00:07:34.223 "00000000-0000-0000-0000-000000000001" 00:07:34.223 ], 00:07:34.223 "product_name": "passthru", 00:07:34.223 "block_size": 512, 00:07:34.223 "num_blocks": 65536, 00:07:34.223 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:34.223 "assigned_rate_limits": { 00:07:34.223 "rw_ios_per_sec": 0, 00:07:34.223 "rw_mbytes_per_sec": 0, 00:07:34.223 "r_mbytes_per_sec": 0, 00:07:34.223 "w_mbytes_per_sec": 0 00:07:34.223 }, 00:07:34.223 "claimed": true, 00:07:34.223 "claim_type": "exclusive_write", 00:07:34.223 "zoned": false, 00:07:34.224 "supported_io_types": { 00:07:34.224 "read": true, 00:07:34.224 "write": true, 00:07:34.224 "unmap": true, 00:07:34.224 "flush": true, 00:07:34.224 "reset": true, 00:07:34.224 "nvme_admin": false, 00:07:34.224 "nvme_io": false, 00:07:34.224 "nvme_io_md": false, 00:07:34.224 "write_zeroes": true, 
00:07:34.224 "zcopy": true, 00:07:34.224 "get_zone_info": false, 00:07:34.224 "zone_management": false, 00:07:34.224 "zone_append": false, 00:07:34.224 "compare": false, 00:07:34.224 "compare_and_write": false, 00:07:34.224 "abort": true, 00:07:34.224 "seek_hole": false, 00:07:34.224 "seek_data": false, 00:07:34.224 "copy": true, 00:07:34.224 "nvme_iov_md": false 00:07:34.224 }, 00:07:34.224 "memory_domains": [ 00:07:34.224 { 00:07:34.224 "dma_device_id": "system", 00:07:34.224 "dma_device_type": 1 00:07:34.224 }, 00:07:34.224 { 00:07:34.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.224 "dma_device_type": 2 00:07:34.224 } 00:07:34.224 ], 00:07:34.224 "driver_specific": { 00:07:34.224 "passthru": { 00:07:34.224 "name": "pt1", 00:07:34.224 "base_bdev_name": "malloc1" 00:07:34.224 } 00:07:34.224 } 00:07:34.224 }' 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:34.224 21:06:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:34.790 "name": "pt2", 00:07:34.790 "aliases": [ 00:07:34.790 "00000000-0000-0000-0000-000000000002" 00:07:34.790 ], 00:07:34.790 "product_name": "passthru", 00:07:34.790 "block_size": 512, 00:07:34.790 "num_blocks": 65536, 00:07:34.790 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:34.790 "assigned_rate_limits": { 00:07:34.790 "rw_ios_per_sec": 0, 00:07:34.790 "rw_mbytes_per_sec": 0, 00:07:34.790 "r_mbytes_per_sec": 0, 00:07:34.790 "w_mbytes_per_sec": 0 00:07:34.790 }, 00:07:34.790 "claimed": true, 00:07:34.790 "claim_type": "exclusive_write", 00:07:34.790 "zoned": false, 00:07:34.790 "supported_io_types": { 00:07:34.790 "read": true, 00:07:34.790 "write": true, 00:07:34.790 "unmap": true, 00:07:34.790 "flush": true, 00:07:34.790 "reset": true, 00:07:34.790 "nvme_admin": false, 00:07:34.790 "nvme_io": false, 00:07:34.790 "nvme_io_md": false, 00:07:34.790 "write_zeroes": true, 00:07:34.790 "zcopy": true, 00:07:34.790 "get_zone_info": false, 00:07:34.790 "zone_management": false, 00:07:34.790 "zone_append": false, 00:07:34.790 
"compare": false, 00:07:34.790 "compare_and_write": false, 00:07:34.790 "abort": true, 00:07:34.790 "seek_hole": false, 00:07:34.790 "seek_data": false, 00:07:34.790 "copy": true, 00:07:34.790 "nvme_iov_md": false 00:07:34.790 }, 00:07:34.790 "memory_domains": [ 00:07:34.790 { 00:07:34.790 "dma_device_id": "system", 00:07:34.790 "dma_device_type": 1 00:07:34.790 }, 00:07:34.790 { 00:07:34.790 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:34.790 "dma_device_type": 2 00:07:34.790 } 00:07:34.790 ], 00:07:34.790 "driver_specific": { 00:07:34.790 "passthru": { 00:07:34.790 "name": "pt2", 00:07:34.790 "base_bdev_name": "malloc2" 00:07:34.790 } 00:07:34.790 } 00:07:34.790 }' 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:34.790 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:07:35.048 [2024-07-14 21:06:46.353468] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:35.048 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=f9a6dfac-4224-11ef-aa83-81fbc7dfef58 00:07:35.048 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z f9a6dfac-4224-11ef-aa83-81fbc7dfef58 ']' 00:07:35.048 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:35.306 [2024-07-14 21:06:46.617454] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:35.306 [2024-07-14 21:06:46.617470] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:35.306 [2024-07-14 21:06:46.617512] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:35.306 [2024-07-14 21:06:46.617523] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:35.306 [2024-07-14 21:06:46.617527] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19aace034f00 name raid_bdev1, state offline 00:07:35.306 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:07:35.306 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:07:35.565 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:07:35.565 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:07:35.565 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:35.565 21:06:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:07:35.824 21:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:07:35.824 21:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:07:36.083 21:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:07:36.083 21:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2' -n raid_bdev1 00:07:36.342 [2024-07-14 21:06:47.861608] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:07:36.342 [2024-07-14 21:06:47.862270] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:07:36.342 [2024-07-14 21:06:47.862297] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a 
different raid bdev found on bdev malloc1 00:07:36.342 [2024-07-14 21:06:47.862349] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:07:36.342 [2024-07-14 21:06:47.862359] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:36.342 [2024-07-14 21:06:47.862363] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19aace034c80 name raid_bdev1, state configuring 00:07:36.342 request: 00:07:36.342 { 00:07:36.342 "name": "raid_bdev1", 00:07:36.342 "raid_level": "raid0", 00:07:36.342 "base_bdevs": [ 00:07:36.342 "malloc1", 00:07:36.342 "malloc2" 00:07:36.342 ], 00:07:36.342 "strip_size_kb": 64, 00:07:36.342 "superblock": false, 00:07:36.342 "method": "bdev_raid_create", 00:07:36.342 "req_id": 1 00:07:36.342 } 00:07:36.342 Got JSON-RPC error response 00:07:36.342 response: 00:07:36.342 { 00:07:36.342 "code": -17, 00:07:36.342 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:07:36.342 } 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:07:36.342 21:06:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:36.601 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:07:36.601 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:07:36.601 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:07:36.860 [2024-07-14 21:06:48.281628] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:07:36.860 [2024-07-14 21:06:48.281684] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:36.860 [2024-07-14 21:06:48.281711] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19aace034780 00:07:36.860 [2024-07-14 21:06:48.281718] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:36.860 [2024-07-14 21:06:48.282493] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:36.860 [2024-07-14 21:06:48.282565] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:07:36.860 [2024-07-14 21:06:48.282621] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:07:36.860 [2024-07-14 21:06:48.282648] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:07:36.860 pt1 00:07:36.860 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 2 00:07:36.860 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:36.860 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:36.860 21:06:48 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:36.860 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:36.860 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:36.860 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:36.860 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:36.860 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:36.860 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:36.860 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:36.860 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.119 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:37.119 "name": "raid_bdev1", 00:07:37.119 "uuid": "f9a6dfac-4224-11ef-aa83-81fbc7dfef58", 00:07:37.119 "strip_size_kb": 64, 00:07:37.119 "state": "configuring", 00:07:37.119 "raid_level": "raid0", 00:07:37.119 "superblock": true, 00:07:37.119 "num_base_bdevs": 2, 00:07:37.119 "num_base_bdevs_discovered": 1, 00:07:37.119 "num_base_bdevs_operational": 2, 00:07:37.119 "base_bdevs_list": [ 00:07:37.119 { 00:07:37.119 "name": "pt1", 00:07:37.119 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.119 "is_configured": true, 00:07:37.119 "data_offset": 2048, 00:07:37.119 "data_size": 63488 00:07:37.119 }, 00:07:37.119 { 00:07:37.119 "name": null, 00:07:37.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.119 "is_configured": false, 00:07:37.119 "data_offset": 2048, 00:07:37.119 "data_size": 63488 00:07:37.119 } 00:07:37.119 ] 00:07:37.119 }' 00:07:37.119 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:37.119 21:06:48 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:37.378 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:07:37.378 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:07:37.378 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:37.378 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:07:37.637 [2024-07-14 21:06:48.981667] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:07:37.637 [2024-07-14 21:06:48.981743] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:37.637 [2024-07-14 21:06:48.981758] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x19aace034f00 00:07:37.637 [2024-07-14 21:06:48.981766] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:37.637 [2024-07-14 21:06:48.981918] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:37.637 [2024-07-14 21:06:48.981941] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:07:37.637 [2024-07-14 21:06:48.981971] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:07:37.637 
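
For orientation, the superblock reassembly being exercised above can be reproduced by hand; a minimal sketch, assuming an SPDK target listening on /var/tmp/spdk-raid.sock and using only RPCs that appear verbatim in this trace:

    # Recreate each passthru base bdev; the examine path finds the raid
    # superblock on it and re-claims it for raid_bdev1.
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
    # raid_bdev1 is now "configuring" (1 of 2 base bdevs discovered).
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
    # With both base bdevs present, raid_bdev1 transitions back to "online".
    rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all
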
[2024-07-14 21:06:48.981980] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:07:37.637 [2024-07-14 21:06:48.982034] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x19aace035180 00:07:37.637 [2024-07-14 21:06:48.982039] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:37.637 [2024-07-14 21:06:48.982058] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x19aace097e20 00:07:37.637 [2024-07-14 21:06:48.982134] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x19aace035180 00:07:37.637 [2024-07-14 21:06:48.982139] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x19aace035180 00:07:37.637 [2024-07-14 21:06:48.982161] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:37.637 pt2 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:37.637 21:06:48 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:37.897 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:37.897 "name": "raid_bdev1", 00:07:37.897 "uuid": "f9a6dfac-4224-11ef-aa83-81fbc7dfef58", 00:07:37.897 "strip_size_kb": 64, 00:07:37.897 "state": "online", 00:07:37.897 "raid_level": "raid0", 00:07:37.897 "superblock": true, 00:07:37.897 "num_base_bdevs": 2, 00:07:37.897 "num_base_bdevs_discovered": 2, 00:07:37.897 "num_base_bdevs_operational": 2, 00:07:37.897 "base_bdevs_list": [ 00:07:37.897 { 00:07:37.897 "name": "pt1", 00:07:37.897 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:37.897 "is_configured": true, 00:07:37.897 "data_offset": 2048, 00:07:37.897 "data_size": 63488 00:07:37.897 }, 00:07:37.897 { 00:07:37.897 "name": "pt2", 00:07:37.897 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:37.897 "is_configured": true, 00:07:37.897 "data_offset": 2048, 00:07:37.897 "data_size": 63488 00:07:37.897 } 00:07:37.897 ] 00:07:37.897 }' 00:07:37.897 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 
-- # xtrace_disable 00:07:37.897 21:06:49 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:38.156 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:07:38.156 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:07:38.156 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:38.156 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:38.156 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:38.156 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:38.156 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:38.156 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:38.415 [2024-07-14 21:06:49.709769] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:38.415 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:38.415 "name": "raid_bdev1", 00:07:38.415 "aliases": [ 00:07:38.415 "f9a6dfac-4224-11ef-aa83-81fbc7dfef58" 00:07:38.415 ], 00:07:38.415 "product_name": "Raid Volume", 00:07:38.415 "block_size": 512, 00:07:38.415 "num_blocks": 126976, 00:07:38.415 "uuid": "f9a6dfac-4224-11ef-aa83-81fbc7dfef58", 00:07:38.415 "assigned_rate_limits": { 00:07:38.415 "rw_ios_per_sec": 0, 00:07:38.415 "rw_mbytes_per_sec": 0, 00:07:38.415 "r_mbytes_per_sec": 0, 00:07:38.415 "w_mbytes_per_sec": 0 00:07:38.415 }, 00:07:38.415 "claimed": false, 00:07:38.415 "zoned": false, 00:07:38.415 "supported_io_types": { 00:07:38.415 "read": true, 00:07:38.415 "write": true, 00:07:38.415 "unmap": true, 00:07:38.415 "flush": true, 00:07:38.415 "reset": true, 00:07:38.415 "nvme_admin": false, 00:07:38.415 "nvme_io": false, 00:07:38.415 "nvme_io_md": false, 00:07:38.415 "write_zeroes": true, 00:07:38.415 "zcopy": false, 00:07:38.415 "get_zone_info": false, 00:07:38.415 "zone_management": false, 00:07:38.415 "zone_append": false, 00:07:38.415 "compare": false, 00:07:38.415 "compare_and_write": false, 00:07:38.415 "abort": false, 00:07:38.415 "seek_hole": false, 00:07:38.415 "seek_data": false, 00:07:38.415 "copy": false, 00:07:38.415 "nvme_iov_md": false 00:07:38.415 }, 00:07:38.415 "memory_domains": [ 00:07:38.415 { 00:07:38.415 "dma_device_id": "system", 00:07:38.415 "dma_device_type": 1 00:07:38.415 }, 00:07:38.415 { 00:07:38.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.415 "dma_device_type": 2 00:07:38.415 }, 00:07:38.415 { 00:07:38.415 "dma_device_id": "system", 00:07:38.415 "dma_device_type": 1 00:07:38.415 }, 00:07:38.415 { 00:07:38.415 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.415 "dma_device_type": 2 00:07:38.415 } 00:07:38.415 ], 00:07:38.415 "driver_specific": { 00:07:38.415 "raid": { 00:07:38.415 "uuid": "f9a6dfac-4224-11ef-aa83-81fbc7dfef58", 00:07:38.415 "strip_size_kb": 64, 00:07:38.415 "state": "online", 00:07:38.415 "raid_level": "raid0", 00:07:38.415 "superblock": true, 00:07:38.415 "num_base_bdevs": 2, 00:07:38.416 "num_base_bdevs_discovered": 2, 00:07:38.416 "num_base_bdevs_operational": 2, 00:07:38.416 "base_bdevs_list": [ 00:07:38.416 { 00:07:38.416 "name": "pt1", 00:07:38.416 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.416 "is_configured": 
true, 00:07:38.416 "data_offset": 2048, 00:07:38.416 "data_size": 63488 00:07:38.416 }, 00:07:38.416 { 00:07:38.416 "name": "pt2", 00:07:38.416 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.416 "is_configured": true, 00:07:38.416 "data_offset": 2048, 00:07:38.416 "data_size": 63488 00:07:38.416 } 00:07:38.416 ] 00:07:38.416 } 00:07:38.416 } 00:07:38.416 }' 00:07:38.416 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:38.416 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:07:38.416 pt2' 00:07:38.416 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:38.416 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:07:38.416 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:38.674 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:38.674 "name": "pt1", 00:07:38.674 "aliases": [ 00:07:38.674 "00000000-0000-0000-0000-000000000001" 00:07:38.674 ], 00:07:38.674 "product_name": "passthru", 00:07:38.674 "block_size": 512, 00:07:38.674 "num_blocks": 65536, 00:07:38.674 "uuid": "00000000-0000-0000-0000-000000000001", 00:07:38.674 "assigned_rate_limits": { 00:07:38.674 "rw_ios_per_sec": 0, 00:07:38.674 "rw_mbytes_per_sec": 0, 00:07:38.674 "r_mbytes_per_sec": 0, 00:07:38.674 "w_mbytes_per_sec": 0 00:07:38.674 }, 00:07:38.674 "claimed": true, 00:07:38.674 "claim_type": "exclusive_write", 00:07:38.674 "zoned": false, 00:07:38.674 "supported_io_types": { 00:07:38.674 "read": true, 00:07:38.674 "write": true, 00:07:38.674 "unmap": true, 00:07:38.674 "flush": true, 00:07:38.674 "reset": true, 00:07:38.674 "nvme_admin": false, 00:07:38.674 "nvme_io": false, 00:07:38.674 "nvme_io_md": false, 00:07:38.674 "write_zeroes": true, 00:07:38.674 "zcopy": true, 00:07:38.674 "get_zone_info": false, 00:07:38.674 "zone_management": false, 00:07:38.674 "zone_append": false, 00:07:38.674 "compare": false, 00:07:38.674 "compare_and_write": false, 00:07:38.674 "abort": true, 00:07:38.674 "seek_hole": false, 00:07:38.674 "seek_data": false, 00:07:38.674 "copy": true, 00:07:38.674 "nvme_iov_md": false 00:07:38.674 }, 00:07:38.674 "memory_domains": [ 00:07:38.674 { 00:07:38.674 "dma_device_id": "system", 00:07:38.674 "dma_device_type": 1 00:07:38.674 }, 00:07:38.674 { 00:07:38.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.674 "dma_device_type": 2 00:07:38.674 } 00:07:38.674 ], 00:07:38.674 "driver_specific": { 00:07:38.674 "passthru": { 00:07:38.674 "name": "pt1", 00:07:38.674 "base_bdev_name": "malloc1" 00:07:38.674 } 00:07:38.674 } 00:07:38.674 }' 00:07:38.674 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:38.674 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:38.674 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:38.674 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:38.674 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:38.674 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:38.674 21:06:49 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:07:38.674 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:38.674 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:38.674 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:38.674 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:38.674 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:38.674 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:38.674 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:07:38.674 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:38.934 "name": "pt2", 00:07:38.934 "aliases": [ 00:07:38.934 "00000000-0000-0000-0000-000000000002" 00:07:38.934 ], 00:07:38.934 "product_name": "passthru", 00:07:38.934 "block_size": 512, 00:07:38.934 "num_blocks": 65536, 00:07:38.934 "uuid": "00000000-0000-0000-0000-000000000002", 00:07:38.934 "assigned_rate_limits": { 00:07:38.934 "rw_ios_per_sec": 0, 00:07:38.934 "rw_mbytes_per_sec": 0, 00:07:38.934 "r_mbytes_per_sec": 0, 00:07:38.934 "w_mbytes_per_sec": 0 00:07:38.934 }, 00:07:38.934 "claimed": true, 00:07:38.934 "claim_type": "exclusive_write", 00:07:38.934 "zoned": false, 00:07:38.934 "supported_io_types": { 00:07:38.934 "read": true, 00:07:38.934 "write": true, 00:07:38.934 "unmap": true, 00:07:38.934 "flush": true, 00:07:38.934 "reset": true, 00:07:38.934 "nvme_admin": false, 00:07:38.934 "nvme_io": false, 00:07:38.934 "nvme_io_md": false, 00:07:38.934 "write_zeroes": true, 00:07:38.934 "zcopy": true, 00:07:38.934 "get_zone_info": false, 00:07:38.934 "zone_management": false, 00:07:38.934 "zone_append": false, 00:07:38.934 "compare": false, 00:07:38.934 "compare_and_write": false, 00:07:38.934 "abort": true, 00:07:38.934 "seek_hole": false, 00:07:38.934 "seek_data": false, 00:07:38.934 "copy": true, 00:07:38.934 "nvme_iov_md": false 00:07:38.934 }, 00:07:38.934 "memory_domains": [ 00:07:38.934 { 00:07:38.934 "dma_device_id": "system", 00:07:38.934 "dma_device_type": 1 00:07:38.934 }, 00:07:38.934 { 00:07:38.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:38.934 "dma_device_type": 2 00:07:38.934 } 00:07:38.934 ], 00:07:38.934 "driver_specific": { 00:07:38.934 "passthru": { 00:07:38.934 "name": "pt2", 00:07:38.934 "base_bdev_name": "malloc2" 00:07:38.934 } 00:07:38.934 } 00:07:38.934 }' 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:38.934 21:06:50 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:07:38.934 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:07:39.194 [2024-07-14 21:06:50.633803] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' f9a6dfac-4224-11ef-aa83-81fbc7dfef58 '!=' f9a6dfac-4224-11ef-aa83-81fbc7dfef58 ']' 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 49164 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 49164 ']' 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 49164 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 49164 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:39.194 killing process with pid 49164 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49164' 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 49164 00:07:39.194 [2024-07-14 21:06:50.660105] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:39.194 [2024-07-14 21:06:50.660128] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:39.194 [2024-07-14 21:06:50.660143] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:39.194 [2024-07-14 21:06:50.660147] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x19aace035180 name raid_bdev1, state offline 00:07:39.194 21:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 49164 00:07:39.194 [2024-07-14 21:06:50.676650] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:39.453 21:06:50 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:07:39.453 00:07:39.453 real 0m8.425s 00:07:39.453 user 0m14.634s 00:07:39.453 sys 0m1.424s 00:07:39.453 21:06:50 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:39.453 21:06:50 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.453 ************************************ 00:07:39.453 END TEST raid_superblock_test 00:07:39.453 ************************************ 00:07:39.453 21:06:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:39.453 21:06:50 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 2 read 00:07:39.453 21:06:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:39.453 21:06:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:39.453 21:06:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:39.453 ************************************ 00:07:39.453 START TEST raid_read_error_test 00:07:39.453 ************************************ 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 read 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.tINRIAEXwB 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49429 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49429 
/var/tmp/spdk-raid.sock 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 49429 ']' 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:39.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:39.453 21:06:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:39.453 [2024-07-14 21:06:50.971638] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:39.453 [2024-07-14 21:06:50.971888] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:40.021 EAL: TSC is not safe to use in SMP mode 00:07:40.021 EAL: TSC is not invariant 00:07:40.021 [2024-07-14 21:06:51.502004] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:40.280 [2024-07-14 21:06:51.600242] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:40.280 [2024-07-14 21:06:51.602949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:40.280 [2024-07-14 21:06:51.604030] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.280 [2024-07-14 21:06:51.604050] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:40.539 21:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:40.539 21:06:52 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:07:40.539 21:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:40.539 21:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:40.797 BaseBdev1_malloc 00:07:40.797 21:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:41.056 true 00:07:41.056 21:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:41.314 [2024-07-14 21:06:52.693590] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:41.314 [2024-07-14 21:06:52.693670] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.314 [2024-07-14 21:06:52.693727] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x117379e34780 00:07:41.314 [2024-07-14 21:06:52.693735] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
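
Each RAID base bdev in the error tests is a malloc -> error -> passthru stack, so failures can be injected beneath the raid layer without touching it directly; a sketch of the stack for BaseBdev1, assuming the same RPC socket (all four commands appear verbatim in this trace):

    rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    # bdev_error_create wraps BaseBdev1_malloc and exposes it as EE_BaseBdev1_malloc.
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
    rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
    # The test later injects failures through the error bdev, e.g. for reads:
    rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure
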
00:07:41.314 [2024-07-14 21:06:52.694559] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.314 [2024-07-14 21:06:52.694617] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:41.314 BaseBdev1 00:07:41.314 21:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:41.314 21:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:41.572 BaseBdev2_malloc 00:07:41.572 21:06:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:41.831 true 00:07:41.831 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:41.831 [2024-07-14 21:06:53.357617] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:41.831 [2024-07-14 21:06:53.357669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:41.831 [2024-07-14 21:06:53.357704] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x117379e34c80 00:07:41.831 [2024-07-14 21:06:53.357711] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:41.831 [2024-07-14 21:06:53.358325] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:41.831 [2024-07-14 21:06:53.358353] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:41.831 BaseBdev2 00:07:41.831 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:42.091 [2024-07-14 21:06:53.569656] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:42.091 [2024-07-14 21:06:53.570243] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:42.091 [2024-07-14 21:06:53.570327] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x117379e34f00 00:07:42.091 [2024-07-14 21:06:53.570334] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:42.091 [2024-07-14 21:06:53.570362] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x117379ea0e20 00:07:42.091 [2024-07-14 21:06:53.570499] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x117379e34f00 00:07:42.091 [2024-07-14 21:06:53.570507] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x117379e34f00 00:07:42.091 [2024-07-14 21:06:53.570533] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:42.091 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:42.091 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:42.091 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:42.091 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:42.091 21:06:53 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:42.091 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:42.091 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:42.091 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:42.091 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:42.091 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:42.091 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:42.091 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:42.349 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:42.349 "name": "raid_bdev1", 00:07:42.349 "uuid": "fef8006b-4224-11ef-aa83-81fbc7dfef58", 00:07:42.349 "strip_size_kb": 64, 00:07:42.349 "state": "online", 00:07:42.349 "raid_level": "raid0", 00:07:42.349 "superblock": true, 00:07:42.349 "num_base_bdevs": 2, 00:07:42.349 "num_base_bdevs_discovered": 2, 00:07:42.349 "num_base_bdevs_operational": 2, 00:07:42.349 "base_bdevs_list": [ 00:07:42.349 { 00:07:42.349 "name": "BaseBdev1", 00:07:42.350 "uuid": "f2186c26-3e94-4d55-94d5-29d00fad5a81", 00:07:42.350 "is_configured": true, 00:07:42.350 "data_offset": 2048, 00:07:42.350 "data_size": 63488 00:07:42.350 }, 00:07:42.350 { 00:07:42.350 "name": "BaseBdev2", 00:07:42.350 "uuid": "30d8d771-9a82-e25f-a06a-db90329e0724", 00:07:42.350 "is_configured": true, 00:07:42.350 "data_offset": 2048, 00:07:42.350 "data_size": 63488 00:07:42.350 } 00:07:42.350 ] 00:07:42.350 }' 00:07:42.350 21:06:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:42.350 21:06:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:42.608 21:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:42.608 21:06:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:42.608 [2024-07-14 21:06:54.141905] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x117379ea0ec0 00:07:43.544 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:44.112 21:06:55 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:44.112 "name": "raid_bdev1", 00:07:44.112 "uuid": "fef8006b-4224-11ef-aa83-81fbc7dfef58", 00:07:44.112 "strip_size_kb": 64, 00:07:44.112 "state": "online", 00:07:44.112 "raid_level": "raid0", 00:07:44.112 "superblock": true, 00:07:44.112 "num_base_bdevs": 2, 00:07:44.112 "num_base_bdevs_discovered": 2, 00:07:44.112 "num_base_bdevs_operational": 2, 00:07:44.112 "base_bdevs_list": [ 00:07:44.112 { 00:07:44.112 "name": "BaseBdev1", 00:07:44.112 "uuid": "f2186c26-3e94-4d55-94d5-29d00fad5a81", 00:07:44.112 "is_configured": true, 00:07:44.112 "data_offset": 2048, 00:07:44.112 "data_size": 63488 00:07:44.112 }, 00:07:44.112 { 00:07:44.112 "name": "BaseBdev2", 00:07:44.112 "uuid": "30d8d771-9a82-e25f-a06a-db90329e0724", 00:07:44.112 "is_configured": true, 00:07:44.112 "data_offset": 2048, 00:07:44.112 "data_size": 63488 00:07:44.112 } 00:07:44.112 ] 00:07:44.112 }' 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:44.112 21:06:55 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.371 21:06:55 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:44.632 [2024-07-14 21:06:56.162397] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:44.632 [2024-07-14 21:06:56.162422] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:44.632 [2024-07-14 21:06:56.162741] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:44.632 [2024-07-14 21:06:56.162749] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:44.632 [2024-07-14 21:06:56.162755] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:44.632 [2024-07-14 21:06:56.162758] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x117379e34f00 name raid_bdev1, state offline 00:07:44.632 0 00:07:44.632 21:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49429 00:07:44.632 21:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 49429 ']' 00:07:44.632 21:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 49429 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49429 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:44.891 killing process with pid 49429 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49429' 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 49429 00:07:44.891 [2024-07-14 21:06:56.191442] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 49429 00:07:44.891 [2024-07-14 21:06:56.201420] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.tINRIAEXwB 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:07:44.891 00:07:44.891 real 0m5.416s 00:07:44.891 user 0m8.139s 00:07:44.891 sys 0m1.001s 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:44.891 21:06:56 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.891 ************************************ 00:07:44.891 END TEST raid_read_error_test 00:07:44.891 ************************************ 00:07:44.891 21:06:56 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:44.891 21:06:56 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 2 write 00:07:44.891 21:06:56 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:44.891 21:06:56 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:44.891 21:06:56 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:44.891 ************************************ 00:07:44.891 START TEST raid_write_error_test 00:07:44.891 ************************************ 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 2 write 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.8o0eSIoVIO 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=49553 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 49553 /var/tmp/spdk-raid.sock 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 49553 ']' 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:44.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:44.891 21:06:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:44.891 [2024-07-14 21:06:56.435948] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
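
The write-error variant drives I/O the same way as the read test: bdevperf is started with the raid bdev as its target, and a helper script kicks off the run once the RPC socket is up. Both invocations below are taken from this trace (60 seconds of 50/50 random read/write, 128k I/Os at queue depth 1):

    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid
    # In a second shell, once bdevperf is listening:
    /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests
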
00:07:44.891 [2024-07-14 21:06:56.436122] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:45.457 EAL: TSC is not safe to use in SMP mode 00:07:45.457 EAL: TSC is not invariant 00:07:45.457 [2024-07-14 21:06:56.942291] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:45.716 [2024-07-14 21:06:57.022718] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:07:45.716 [2024-07-14 21:06:57.025058] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:45.716 [2024-07-14 21:06:57.025981] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.716 [2024-07-14 21:06:57.025996] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:45.974 21:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:45.974 21:06:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:07:45.974 21:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:45.974 21:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:07:46.232 BaseBdev1_malloc 00:07:46.232 21:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:07:46.490 true 00:07:46.490 21:06:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:07:46.747 [2024-07-14 21:06:58.177058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:07:46.747 [2024-07-14 21:06:58.177121] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:46.747 [2024-07-14 21:06:58.177159] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34bdd6e34780 00:07:46.747 [2024-07-14 21:06:58.177167] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:46.747 [2024-07-14 21:06:58.177845] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:46.747 [2024-07-14 21:06:58.177874] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:07:46.747 BaseBdev1 00:07:46.747 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:07:46.747 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:07:47.005 BaseBdev2_malloc 00:07:47.005 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:07:47.262 true 00:07:47.262 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:07:47.262 [2024-07-14 21:06:58.757123] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:07:47.262 [2024-07-14 21:06:58.757203] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:07:47.262 [2024-07-14 21:06:58.757237] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x34bdd6e34c80 00:07:47.262 [2024-07-14 21:06:58.757246] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:07:47.262 [2024-07-14 21:06:58.758186] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:07:47.262 [2024-07-14 21:06:58.758221] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:07:47.262 BaseBdev2 00:07:47.262 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:07:47.520 [2024-07-14 21:06:58.953140] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:47.520 [2024-07-14 21:06:58.953885] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:47.520 [2024-07-14 21:06:58.953980] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x34bdd6e34f00 00:07:47.520 [2024-07-14 21:06:58.953989] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:07:47.520 [2024-07-14 21:06:58.954025] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34bdd6ea0e20 00:07:47.520 [2024-07-14 21:06:58.954150] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x34bdd6e34f00 00:07:47.520 [2024-07-14 21:06:58.954155] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x34bdd6e34f00 00:07:47.520 [2024-07-14 21:06:58.954186] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:47.520 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:47.520 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:47.520 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:47.520 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:47.520 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:47.520 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:47.521 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:47.521 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:47.521 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:47.521 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:47.521 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:47.521 21:06:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:47.779 21:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:47.779 "name": "raid_bdev1", 00:07:47.779 "uuid": "022d748e-4225-11ef-aa83-81fbc7dfef58", 00:07:47.779 "strip_size_kb": 64, 00:07:47.779 "state": "online", 00:07:47.779 
"raid_level": "raid0", 00:07:47.779 "superblock": true, 00:07:47.779 "num_base_bdevs": 2, 00:07:47.779 "num_base_bdevs_discovered": 2, 00:07:47.779 "num_base_bdevs_operational": 2, 00:07:47.779 "base_bdevs_list": [ 00:07:47.779 { 00:07:47.779 "name": "BaseBdev1", 00:07:47.779 "uuid": "df7c72c5-dc40-0154-a762-fdcbbc07e205", 00:07:47.779 "is_configured": true, 00:07:47.779 "data_offset": 2048, 00:07:47.779 "data_size": 63488 00:07:47.779 }, 00:07:47.779 { 00:07:47.779 "name": "BaseBdev2", 00:07:47.779 "uuid": "ec52ec57-ae72-795e-8a7a-5ab9262bae5d", 00:07:47.779 "is_configured": true, 00:07:47.779 "data_offset": 2048, 00:07:47.779 "data_size": 63488 00:07:47.779 } 00:07:47.779 ] 00:07:47.779 }' 00:07:47.779 21:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:47.779 21:06:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:48.037 21:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:07:48.037 21:06:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:07:48.296 [2024-07-14 21:06:59.601348] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x34bdd6ea0ec0 00:07:49.231 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 2 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:49.490 21:07:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:07:49.490 21:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:49.490 "name": "raid_bdev1", 00:07:49.490 "uuid": "022d748e-4225-11ef-aa83-81fbc7dfef58", 00:07:49.490 "strip_size_kb": 64, 00:07:49.490 "state": "online", 00:07:49.490 
"raid_level": "raid0", 00:07:49.490 "superblock": true, 00:07:49.490 "num_base_bdevs": 2, 00:07:49.490 "num_base_bdevs_discovered": 2, 00:07:49.490 "num_base_bdevs_operational": 2, 00:07:49.490 "base_bdevs_list": [ 00:07:49.490 { 00:07:49.490 "name": "BaseBdev1", 00:07:49.490 "uuid": "df7c72c5-dc40-0154-a762-fdcbbc07e205", 00:07:49.490 "is_configured": true, 00:07:49.490 "data_offset": 2048, 00:07:49.490 "data_size": 63488 00:07:49.490 }, 00:07:49.490 { 00:07:49.490 "name": "BaseBdev2", 00:07:49.490 "uuid": "ec52ec57-ae72-795e-8a7a-5ab9262bae5d", 00:07:49.490 "is_configured": true, 00:07:49.490 "data_offset": 2048, 00:07:49.490 "data_size": 63488 00:07:49.490 } 00:07:49.490 ] 00:07:49.490 }' 00:07:49.490 21:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:49.490 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.057 21:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:07:50.057 [2024-07-14 21:07:01.522647] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:07:50.057 [2024-07-14 21:07:01.522702] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:50.057 [2024-07-14 21:07:01.523113] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:50.057 [2024-07-14 21:07:01.523136] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:50.057 [2024-07-14 21:07:01.523143] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:07:50.057 [2024-07-14 21:07:01.523148] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x34bdd6e34f00 name raid_bdev1, state offline 00:07:50.057 0 00:07:50.057 21:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 49553 00:07:50.057 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 49553 ']' 00:07:50.057 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 49553 00:07:50.057 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:07:50.057 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:50.057 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 49553 00:07:50.057 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:07:50.057 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:07:50.057 killing process with pid 49553 00:07:50.057 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:07:50.057 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49553' 00:07:50.057 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 49553 00:07:50.057 [2024-07-14 21:07:01.548438] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:50.057 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 49553 00:07:50.057 [2024-07-14 21:07:01.565128] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:50.316 21:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.8o0eSIoVIO 00:07:50.316 21:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:07:50.316 21:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:07:50.316 21:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:07:50.316 21:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:07:50.316 21:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:50.316 21:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:50.316 21:07:01 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:07:50.316 00:07:50.316 real 0m5.386s 00:07:50.316 user 0m8.099s 00:07:50.316 sys 0m0.896s 00:07:50.316 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:50.316 21:07:01 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.316 ************************************ 00:07:50.316 END TEST raid_write_error_test 00:07:50.316 ************************************ 00:07:50.316 21:07:01 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:50.316 21:07:01 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:07:50.316 21:07:01 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 2 false 00:07:50.316 21:07:01 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:50.316 21:07:01 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:50.316 21:07:01 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:50.316 ************************************ 00:07:50.316 START TEST raid_state_function_test 00:07:50.316 ************************************ 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 false 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:50.316 21:07:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:50.316 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=49675 00:07:50.317 Process raid pid: 49675 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49675' 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 49675 /var/tmp/spdk-raid.sock 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 49675 ']' 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:50.317 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:50.317 21:07:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:50.575 [2024-07-14 21:07:01.865584] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:50.575 [2024-07-14 21:07:01.865788] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:51.141 EAL: TSC is not safe to use in SMP mode 00:07:51.141 EAL: TSC is not invariant 00:07:51.141 [2024-07-14 21:07:02.416032] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:51.141 [2024-07-14 21:07:02.507979] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
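Before the first bdev_raid_create call below, it helps to keep in mind the state machine this test exercises: a raid bdev stays in "configuring" until every base bdev it names has been discovered and claimed, flips to "online" once the set is complete, and drops to "offline" when a base is removed from a level with no redundancy (concat, like raid0, returns 1 from has_redundancy). A condensed sketch of the transitions driven in the trace below, with names taken from the log (an annotated summary, not the test script itself):

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
$RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
#   neither base exists yet -> state "configuring", 0 of 2 discovered
$RPC bdev_malloc_create 32 512 -b BaseBdev1   # 1 of 2 discovered, still "configuring"
$RPC bdev_malloc_create 32 512 -b BaseBdev2   # 2 of 2 discovered -> state "online"
$RPC bdev_malloc_delete BaseBdev1             # concat has no redundancy -> "offline"

Each step is verified by fetching bdev_raid_get_bdevs all and selecting the Existed_Raid entry with jq, as the repeated raid_bdev_info dumps below show.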
00:07:51.141 [2024-07-14 21:07:02.510335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:51.141 [2024-07-14 21:07:02.511225] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.141 [2024-07-14 21:07:02.511254] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:51.399 21:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:51.399 21:07:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:07:51.399 21:07:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:51.658 [2024-07-14 21:07:03.146454] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:51.658 [2024-07-14 21:07:03.146498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:51.658 [2024-07-14 21:07:03.146503] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:51.658 [2024-07-14 21:07:03.146527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:51.658 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:51.658 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:51.658 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:51.658 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:51.658 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:51.658 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:51.658 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:51.658 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:51.658 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:51.658 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:51.658 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:51.658 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:51.915 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:51.915 "name": "Existed_Raid", 00:07:51.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.915 "strip_size_kb": 64, 00:07:51.915 "state": "configuring", 00:07:51.915 "raid_level": "concat", 00:07:51.915 "superblock": false, 00:07:51.915 "num_base_bdevs": 2, 00:07:51.915 "num_base_bdevs_discovered": 0, 00:07:51.915 "num_base_bdevs_operational": 2, 00:07:51.915 "base_bdevs_list": [ 00:07:51.915 { 00:07:51.915 "name": "BaseBdev1", 00:07:51.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.915 "is_configured": false, 00:07:51.915 "data_offset": 0, 00:07:51.915 "data_size": 0 00:07:51.915 }, 00:07:51.915 { 00:07:51.915 "name": "BaseBdev2", 
00:07:51.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:51.915 "is_configured": false, 00:07:51.915 "data_offset": 0, 00:07:51.915 "data_size": 0 00:07:51.915 } 00:07:51.915 ] 00:07:51.915 }' 00:07:51.915 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:51.915 21:07:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:52.173 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:52.430 [2024-07-14 21:07:03.898427] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:52.430 [2024-07-14 21:07:03.898446] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3a2aeac34500 name Existed_Raid, state configuring 00:07:52.430 21:07:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:52.688 [2024-07-14 21:07:04.166451] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:07:52.688 [2024-07-14 21:07:04.166498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:07:52.688 [2024-07-14 21:07:04.166503] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:52.688 [2024-07-14 21:07:04.166527] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:52.688 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:07:52.945 [2024-07-14 21:07:04.427600] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:52.945 BaseBdev1 00:07:52.945 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:07:52.945 21:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:07:52.945 21:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:52.945 21:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:52.945 21:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:52.945 21:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:52.945 21:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:53.203 21:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:07:53.461 [ 00:07:53.461 { 00:07:53.461 "name": "BaseBdev1", 00:07:53.461 "aliases": [ 00:07:53.461 "0570a176-4225-11ef-aa83-81fbc7dfef58" 00:07:53.461 ], 00:07:53.461 "product_name": "Malloc disk", 00:07:53.461 "block_size": 512, 00:07:53.461 "num_blocks": 65536, 00:07:53.461 "uuid": "0570a176-4225-11ef-aa83-81fbc7dfef58", 00:07:53.461 "assigned_rate_limits": { 00:07:53.461 "rw_ios_per_sec": 0, 00:07:53.461 "rw_mbytes_per_sec": 0, 00:07:53.461 "r_mbytes_per_sec": 0, 00:07:53.461 "w_mbytes_per_sec": 0 00:07:53.461 }, 
00:07:53.461 "claimed": true, 00:07:53.461 "claim_type": "exclusive_write", 00:07:53.461 "zoned": false, 00:07:53.461 "supported_io_types": { 00:07:53.461 "read": true, 00:07:53.461 "write": true, 00:07:53.461 "unmap": true, 00:07:53.461 "flush": true, 00:07:53.461 "reset": true, 00:07:53.461 "nvme_admin": false, 00:07:53.461 "nvme_io": false, 00:07:53.461 "nvme_io_md": false, 00:07:53.461 "write_zeroes": true, 00:07:53.461 "zcopy": true, 00:07:53.461 "get_zone_info": false, 00:07:53.461 "zone_management": false, 00:07:53.461 "zone_append": false, 00:07:53.462 "compare": false, 00:07:53.462 "compare_and_write": false, 00:07:53.462 "abort": true, 00:07:53.462 "seek_hole": false, 00:07:53.462 "seek_data": false, 00:07:53.462 "copy": true, 00:07:53.462 "nvme_iov_md": false 00:07:53.462 }, 00:07:53.462 "memory_domains": [ 00:07:53.462 { 00:07:53.462 "dma_device_id": "system", 00:07:53.462 "dma_device_type": 1 00:07:53.462 }, 00:07:53.462 { 00:07:53.462 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:53.462 "dma_device_type": 2 00:07:53.462 } 00:07:53.462 ], 00:07:53.462 "driver_specific": {} 00:07:53.462 } 00:07:53.462 ] 00:07:53.462 21:07:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:53.462 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:53.462 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:53.462 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:53.462 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:53.462 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:53.462 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:53.462 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:53.462 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:53.462 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:53.462 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:53.462 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:53.462 21:07:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:53.720 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:53.720 "name": "Existed_Raid", 00:07:53.720 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:53.720 "strip_size_kb": 64, 00:07:53.720 "state": "configuring", 00:07:53.720 "raid_level": "concat", 00:07:53.720 "superblock": false, 00:07:53.720 "num_base_bdevs": 2, 00:07:53.720 "num_base_bdevs_discovered": 1, 00:07:53.720 "num_base_bdevs_operational": 2, 00:07:53.720 "base_bdevs_list": [ 00:07:53.720 { 00:07:53.720 "name": "BaseBdev1", 00:07:53.720 "uuid": "0570a176-4225-11ef-aa83-81fbc7dfef58", 00:07:53.720 "is_configured": true, 00:07:53.720 "data_offset": 0, 00:07:53.720 "data_size": 65536 00:07:53.720 }, 00:07:53.720 { 00:07:53.720 "name": "BaseBdev2", 00:07:53.720 "uuid": "00000000-0000-0000-0000-000000000000", 
00:07:53.720 "is_configured": false, 00:07:53.720 "data_offset": 0, 00:07:53.720 "data_size": 0 00:07:53.720 } 00:07:53.720 ] 00:07:53.720 }' 00:07:53.720 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:53.720 21:07:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:53.978 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:07:54.236 [2024-07-14 21:07:05.706697] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:07:54.236 [2024-07-14 21:07:05.706740] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3a2aeac34500 name Existed_Raid, state configuring 00:07:54.236 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:07:54.494 [2024-07-14 21:07:05.910713] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:07:54.494 [2024-07-14 21:07:05.911556] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:07:54.494 [2024-07-14 21:07:05.911607] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:54.494 21:07:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:54.753 21:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:54.753 "name": "Existed_Raid", 00:07:54.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.753 "strip_size_kb": 64, 00:07:54.753 "state": "configuring", 00:07:54.753 "raid_level": "concat", 00:07:54.753 "superblock": false, 00:07:54.753 "num_base_bdevs": 2, 00:07:54.753 "num_base_bdevs_discovered": 1, 00:07:54.753 
"num_base_bdevs_operational": 2, 00:07:54.753 "base_bdevs_list": [ 00:07:54.753 { 00:07:54.753 "name": "BaseBdev1", 00:07:54.753 "uuid": "0570a176-4225-11ef-aa83-81fbc7dfef58", 00:07:54.753 "is_configured": true, 00:07:54.753 "data_offset": 0, 00:07:54.753 "data_size": 65536 00:07:54.753 }, 00:07:54.753 { 00:07:54.753 "name": "BaseBdev2", 00:07:54.753 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:54.753 "is_configured": false, 00:07:54.753 "data_offset": 0, 00:07:54.753 "data_size": 0 00:07:54.753 } 00:07:54.753 ] 00:07:54.753 }' 00:07:54.753 21:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:54.753 21:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:55.011 21:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:07:55.268 [2024-07-14 21:07:06.638913] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:07:55.268 [2024-07-14 21:07:06.638939] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3a2aeac34a00 00:07:55.268 [2024-07-14 21:07:06.638959] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 131072, blocklen 512 00:07:55.268 [2024-07-14 21:07:06.638979] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3a2aeac97e20 00:07:55.268 [2024-07-14 21:07:06.639063] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3a2aeac34a00 00:07:55.268 [2024-07-14 21:07:06.639068] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3a2aeac34a00 00:07:55.268 [2024-07-14 21:07:06.639098] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:07:55.268 BaseBdev2 00:07:55.268 21:07:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:07:55.268 21:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:07:55.268 21:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:07:55.268 21:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:07:55.268 21:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:07:55.268 21:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:07:55.268 21:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:07:55.527 21:07:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:07:55.785 [ 00:07:55.785 { 00:07:55.785 "name": "BaseBdev2", 00:07:55.785 "aliases": [ 00:07:55.785 "06c23194-4225-11ef-aa83-81fbc7dfef58" 00:07:55.785 ], 00:07:55.785 "product_name": "Malloc disk", 00:07:55.785 "block_size": 512, 00:07:55.785 "num_blocks": 65536, 00:07:55.785 "uuid": "06c23194-4225-11ef-aa83-81fbc7dfef58", 00:07:55.785 "assigned_rate_limits": { 00:07:55.785 "rw_ios_per_sec": 0, 00:07:55.785 "rw_mbytes_per_sec": 0, 00:07:55.785 "r_mbytes_per_sec": 0, 00:07:55.785 "w_mbytes_per_sec": 0 00:07:55.785 }, 00:07:55.785 "claimed": true, 00:07:55.785 "claim_type": "exclusive_write", 00:07:55.785 "zoned": 
false, 00:07:55.785 "supported_io_types": { 00:07:55.785 "read": true, 00:07:55.785 "write": true, 00:07:55.785 "unmap": true, 00:07:55.785 "flush": true, 00:07:55.785 "reset": true, 00:07:55.785 "nvme_admin": false, 00:07:55.785 "nvme_io": false, 00:07:55.785 "nvme_io_md": false, 00:07:55.785 "write_zeroes": true, 00:07:55.785 "zcopy": true, 00:07:55.785 "get_zone_info": false, 00:07:55.785 "zone_management": false, 00:07:55.785 "zone_append": false, 00:07:55.785 "compare": false, 00:07:55.785 "compare_and_write": false, 00:07:55.785 "abort": true, 00:07:55.785 "seek_hole": false, 00:07:55.785 "seek_data": false, 00:07:55.785 "copy": true, 00:07:55.785 "nvme_iov_md": false 00:07:55.785 }, 00:07:55.785 "memory_domains": [ 00:07:55.785 { 00:07:55.785 "dma_device_id": "system", 00:07:55.785 "dma_device_type": 1 00:07:55.785 }, 00:07:55.785 { 00:07:55.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:55.785 "dma_device_type": 2 00:07:55.785 } 00:07:55.785 ], 00:07:55.785 "driver_specific": {} 00:07:55.785 } 00:07:55.785 ] 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:55.785 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:56.043 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:56.043 "name": "Existed_Raid", 00:07:56.043 "uuid": "06c23889-4225-11ef-aa83-81fbc7dfef58", 00:07:56.043 "strip_size_kb": 64, 00:07:56.043 "state": "online", 00:07:56.043 "raid_level": "concat", 00:07:56.043 "superblock": false, 00:07:56.043 "num_base_bdevs": 2, 00:07:56.043 "num_base_bdevs_discovered": 2, 00:07:56.043 "num_base_bdevs_operational": 2, 00:07:56.043 "base_bdevs_list": [ 00:07:56.043 { 00:07:56.043 "name": "BaseBdev1", 00:07:56.043 "uuid": "0570a176-4225-11ef-aa83-81fbc7dfef58", 00:07:56.043 "is_configured": true, 00:07:56.043 "data_offset": 0, 00:07:56.043 "data_size": 65536 00:07:56.043 }, 00:07:56.043 { 
00:07:56.043 "name": "BaseBdev2", 00:07:56.043 "uuid": "06c23194-4225-11ef-aa83-81fbc7dfef58", 00:07:56.043 "is_configured": true, 00:07:56.043 "data_offset": 0, 00:07:56.043 "data_size": 65536 00:07:56.043 } 00:07:56.043 ] 00:07:56.043 }' 00:07:56.043 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:56.043 21:07:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:56.301 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:07:56.301 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:07:56.301 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:07:56.301 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:07:56.301 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:07:56.301 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:07:56.301 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:07:56.301 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:07:56.559 [2024-07-14 21:07:07.914980] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:07:56.559 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:07:56.559 "name": "Existed_Raid", 00:07:56.559 "aliases": [ 00:07:56.559 "06c23889-4225-11ef-aa83-81fbc7dfef58" 00:07:56.559 ], 00:07:56.559 "product_name": "Raid Volume", 00:07:56.559 "block_size": 512, 00:07:56.559 "num_blocks": 131072, 00:07:56.559 "uuid": "06c23889-4225-11ef-aa83-81fbc7dfef58", 00:07:56.559 "assigned_rate_limits": { 00:07:56.559 "rw_ios_per_sec": 0, 00:07:56.559 "rw_mbytes_per_sec": 0, 00:07:56.559 "r_mbytes_per_sec": 0, 00:07:56.559 "w_mbytes_per_sec": 0 00:07:56.559 }, 00:07:56.559 "claimed": false, 00:07:56.559 "zoned": false, 00:07:56.559 "supported_io_types": { 00:07:56.559 "read": true, 00:07:56.559 "write": true, 00:07:56.559 "unmap": true, 00:07:56.559 "flush": true, 00:07:56.559 "reset": true, 00:07:56.559 "nvme_admin": false, 00:07:56.559 "nvme_io": false, 00:07:56.559 "nvme_io_md": false, 00:07:56.559 "write_zeroes": true, 00:07:56.559 "zcopy": false, 00:07:56.559 "get_zone_info": false, 00:07:56.559 "zone_management": false, 00:07:56.559 "zone_append": false, 00:07:56.559 "compare": false, 00:07:56.559 "compare_and_write": false, 00:07:56.559 "abort": false, 00:07:56.559 "seek_hole": false, 00:07:56.559 "seek_data": false, 00:07:56.559 "copy": false, 00:07:56.559 "nvme_iov_md": false 00:07:56.559 }, 00:07:56.559 "memory_domains": [ 00:07:56.559 { 00:07:56.559 "dma_device_id": "system", 00:07:56.559 "dma_device_type": 1 00:07:56.559 }, 00:07:56.559 { 00:07:56.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.559 "dma_device_type": 2 00:07:56.559 }, 00:07:56.559 { 00:07:56.559 "dma_device_id": "system", 00:07:56.559 "dma_device_type": 1 00:07:56.559 }, 00:07:56.559 { 00:07:56.559 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.559 "dma_device_type": 2 00:07:56.559 } 00:07:56.559 ], 00:07:56.559 "driver_specific": { 00:07:56.559 "raid": { 00:07:56.559 "uuid": "06c23889-4225-11ef-aa83-81fbc7dfef58", 00:07:56.559 "strip_size_kb": 64, 00:07:56.559 "state": 
"online", 00:07:56.559 "raid_level": "concat", 00:07:56.559 "superblock": false, 00:07:56.559 "num_base_bdevs": 2, 00:07:56.559 "num_base_bdevs_discovered": 2, 00:07:56.559 "num_base_bdevs_operational": 2, 00:07:56.559 "base_bdevs_list": [ 00:07:56.559 { 00:07:56.559 "name": "BaseBdev1", 00:07:56.559 "uuid": "0570a176-4225-11ef-aa83-81fbc7dfef58", 00:07:56.559 "is_configured": true, 00:07:56.559 "data_offset": 0, 00:07:56.559 "data_size": 65536 00:07:56.559 }, 00:07:56.559 { 00:07:56.559 "name": "BaseBdev2", 00:07:56.559 "uuid": "06c23194-4225-11ef-aa83-81fbc7dfef58", 00:07:56.559 "is_configured": true, 00:07:56.559 "data_offset": 0, 00:07:56.559 "data_size": 65536 00:07:56.559 } 00:07:56.559 ] 00:07:56.559 } 00:07:56.559 } 00:07:56.559 }' 00:07:56.559 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:07:56.559 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:07:56.559 BaseBdev2' 00:07:56.559 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:56.559 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:07:56.559 21:07:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:56.817 "name": "BaseBdev1", 00:07:56.817 "aliases": [ 00:07:56.817 "0570a176-4225-11ef-aa83-81fbc7dfef58" 00:07:56.817 ], 00:07:56.817 "product_name": "Malloc disk", 00:07:56.817 "block_size": 512, 00:07:56.817 "num_blocks": 65536, 00:07:56.817 "uuid": "0570a176-4225-11ef-aa83-81fbc7dfef58", 00:07:56.817 "assigned_rate_limits": { 00:07:56.817 "rw_ios_per_sec": 0, 00:07:56.817 "rw_mbytes_per_sec": 0, 00:07:56.817 "r_mbytes_per_sec": 0, 00:07:56.817 "w_mbytes_per_sec": 0 00:07:56.817 }, 00:07:56.817 "claimed": true, 00:07:56.817 "claim_type": "exclusive_write", 00:07:56.817 "zoned": false, 00:07:56.817 "supported_io_types": { 00:07:56.817 "read": true, 00:07:56.817 "write": true, 00:07:56.817 "unmap": true, 00:07:56.817 "flush": true, 00:07:56.817 "reset": true, 00:07:56.817 "nvme_admin": false, 00:07:56.817 "nvme_io": false, 00:07:56.817 "nvme_io_md": false, 00:07:56.817 "write_zeroes": true, 00:07:56.817 "zcopy": true, 00:07:56.817 "get_zone_info": false, 00:07:56.817 "zone_management": false, 00:07:56.817 "zone_append": false, 00:07:56.817 "compare": false, 00:07:56.817 "compare_and_write": false, 00:07:56.817 "abort": true, 00:07:56.817 "seek_hole": false, 00:07:56.817 "seek_data": false, 00:07:56.817 "copy": true, 00:07:56.817 "nvme_iov_md": false 00:07:56.817 }, 00:07:56.817 "memory_domains": [ 00:07:56.817 { 00:07:56.817 "dma_device_id": "system", 00:07:56.817 "dma_device_type": 1 00:07:56.817 }, 00:07:56.817 { 00:07:56.817 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:56.817 "dma_device_type": 2 00:07:56.817 } 00:07:56.817 ], 00:07:56.817 "driver_specific": {} 00:07:56.817 }' 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:07:56.817 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:07:57.075 "name": "BaseBdev2", 00:07:57.075 "aliases": [ 00:07:57.075 "06c23194-4225-11ef-aa83-81fbc7dfef58" 00:07:57.075 ], 00:07:57.075 "product_name": "Malloc disk", 00:07:57.075 "block_size": 512, 00:07:57.075 "num_blocks": 65536, 00:07:57.075 "uuid": "06c23194-4225-11ef-aa83-81fbc7dfef58", 00:07:57.075 "assigned_rate_limits": { 00:07:57.075 "rw_ios_per_sec": 0, 00:07:57.075 "rw_mbytes_per_sec": 0, 00:07:57.075 "r_mbytes_per_sec": 0, 00:07:57.075 "w_mbytes_per_sec": 0 00:07:57.075 }, 00:07:57.075 "claimed": true, 00:07:57.075 "claim_type": "exclusive_write", 00:07:57.075 "zoned": false, 00:07:57.075 "supported_io_types": { 00:07:57.075 "read": true, 00:07:57.075 "write": true, 00:07:57.075 "unmap": true, 00:07:57.075 "flush": true, 00:07:57.075 "reset": true, 00:07:57.075 "nvme_admin": false, 00:07:57.075 "nvme_io": false, 00:07:57.075 "nvme_io_md": false, 00:07:57.075 "write_zeroes": true, 00:07:57.075 "zcopy": true, 00:07:57.075 "get_zone_info": false, 00:07:57.075 "zone_management": false, 00:07:57.075 "zone_append": false, 00:07:57.075 "compare": false, 00:07:57.075 "compare_and_write": false, 00:07:57.075 "abort": true, 00:07:57.075 "seek_hole": false, 00:07:57.075 "seek_data": false, 00:07:57.075 "copy": true, 00:07:57.075 "nvme_iov_md": false 00:07:57.075 }, 00:07:57.075 "memory_domains": [ 00:07:57.075 { 00:07:57.075 "dma_device_id": "system", 00:07:57.075 "dma_device_type": 1 00:07:57.075 }, 00:07:57.075 { 00:07:57.075 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:07:57.075 "dma_device_type": 2 00:07:57.075 } 00:07:57.075 ], 00:07:57.075 "driver_specific": {} 00:07:57.075 }' 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:07:57.075 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:07:57.333 [2024-07-14 21:07:08.867093] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:07:57.333 [2024-07-14 21:07:08.867116] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:07:57.333 [2024-07-14 21:07:08.867131] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:57.593 21:07:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:07:57.883 21:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:07:57.883 "name": "Existed_Raid", 00:07:57.883 "uuid": "06c23889-4225-11ef-aa83-81fbc7dfef58", 00:07:57.883 "strip_size_kb": 64, 00:07:57.883 "state": "offline", 00:07:57.883 "raid_level": "concat", 00:07:57.883 "superblock": false, 00:07:57.883 
"num_base_bdevs": 2, 00:07:57.883 "num_base_bdevs_discovered": 1, 00:07:57.883 "num_base_bdevs_operational": 1, 00:07:57.883 "base_bdevs_list": [ 00:07:57.883 { 00:07:57.883 "name": null, 00:07:57.883 "uuid": "00000000-0000-0000-0000-000000000000", 00:07:57.883 "is_configured": false, 00:07:57.883 "data_offset": 0, 00:07:57.883 "data_size": 65536 00:07:57.883 }, 00:07:57.883 { 00:07:57.883 "name": "BaseBdev2", 00:07:57.883 "uuid": "06c23194-4225-11ef-aa83-81fbc7dfef58", 00:07:57.883 "is_configured": true, 00:07:57.883 "data_offset": 0, 00:07:57.883 "data_size": 65536 00:07:57.883 } 00:07:57.883 ] 00:07:57.883 }' 00:07:57.883 21:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:07:57.883 21:07:09 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:58.141 21:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:07:58.141 21:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:58.141 21:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:58.141 21:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:07:58.400 21:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:07:58.400 21:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:07:58.400 21:07:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:07:58.658 [2024-07-14 21:07:10.073696] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:07:58.658 [2024-07-14 21:07:10.073770] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3a2aeac34a00 name Existed_Raid, state offline 00:07:58.658 21:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:07:58.658 21:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:07:58.658 21:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:07:58.658 21:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 49675 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 49675 ']' 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 49675 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 49675 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:07:58.915 killing process with pid 49675 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49675' 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 49675 00:07:58.915 [2024-07-14 21:07:10.398092] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:07:58.915 [2024-07-14 21:07:10.398127] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:07:58.915 21:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 49675 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:07:59.174 00:07:59.174 real 0m8.739s 00:07:59.174 user 0m15.152s 00:07:59.174 sys 0m1.545s 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:59.174 ************************************ 00:07:59.174 END TEST raid_state_function_test 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:07:59.174 ************************************ 00:07:59.174 21:07:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:07:59.174 21:07:10 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 2 true 00:07:59.174 21:07:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:07:59.174 21:07:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:59.174 21:07:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:07:59.174 ************************************ 00:07:59.174 START TEST raid_state_function_test_sb 00:07:59.174 ************************************ 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 2 true 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=49946 00:07:59.174 Process raid pid: 49946 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 49946' 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 49946 /var/tmp/spdk-raid.sock 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 49946 ']' 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:59.174 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:59.174 21:07:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:07:59.174 [2024-07-14 21:07:10.655106] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:07:59.174 [2024-07-14 21:07:10.655253] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:07:59.740 EAL: TSC is not safe to use in SMP mode 00:07:59.740 EAL: TSC is not invariant 00:07:59.740 [2024-07-14 21:07:11.233287] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.999 [2024-07-14 21:07:11.334201] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
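Everything from this point in raid_state_function_test_sb is plain JSON-RPC traffic between the test script and the freshly started bdev_svc app, carried over the private socket /var/tmp/spdk-raid.sock via scripts/rpc.py. A minimal by-hand replay of the create path, as a sketch: the binary path, socket, and every RPC argument are taken verbatim from the surrounding records, while the $SPDK/$SOCK shorthand and the backgrounding with & are assumptions introduced here (the test itself synchronizes with a waitforlisten helper instead). Note that the test deliberately issues bdev_raid_create before the base bdevs exist, to verify the "configuring" state; the sketch shows only the condensed happy path.

    SPDK=/home/vagrant/spdk_repo/spdk    # checkout location seen in the log
    SOCK=/var/tmp/spdk-raid.sock         # private RPC socket for this test
    # minimal bdev application with raid debug logging enabled (-L bdev_raid)
    $SPDK/test/app/bdev_svc/bdev_svc -r $SOCK -i 0 -L bdev_raid &
    # two 32 MB malloc bdevs with 512-byte blocks serve as RAID members
    $SPDK/scripts/rpc.py -s $SOCK bdev_malloc_create 32 512 -b BaseBdev1
    $SPDK/scripts/rpc.py -s $SOCK bdev_malloc_create 32 512 -b BaseBdev2
    # -z 64 = strip size in KiB, -s = write a superblock, -r = raid level
    $SPDK/scripts/rpc.py -s $SOCK bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    $SPDK/scripts/rpc.py -s $SOCK bdev_raid_get_bdevs all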
00:07:59.999 [2024-07-14 21:07:11.336581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:59.999 [2024-07-14 21:07:11.337517] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:07:59.999 [2024-07-14 21:07:11.337547] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:00.257 21:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:00.257 21:07:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:08:00.257 21:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:00.517 [2024-07-14 21:07:11.967650] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:00.517 [2024-07-14 21:07:11.967705] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:00.517 [2024-07-14 21:07:11.967710] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:00.517 [2024-07-14 21:07:11.967733] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:00.517 21:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:00.517 21:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:00.517 21:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:00.517 21:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:00.517 21:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:00.517 21:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:00.517 21:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:00.517 21:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:00.517 21:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:00.517 21:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:00.517 21:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:00.517 21:07:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:00.776 21:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:00.776 "name": "Existed_Raid", 00:08:00.776 "uuid": "09ef5020-4225-11ef-aa83-81fbc7dfef58", 00:08:00.776 "strip_size_kb": 64, 00:08:00.776 "state": "configuring", 00:08:00.776 "raid_level": "concat", 00:08:00.776 "superblock": true, 00:08:00.776 "num_base_bdevs": 2, 00:08:00.776 "num_base_bdevs_discovered": 0, 00:08:00.776 "num_base_bdevs_operational": 2, 00:08:00.776 "base_bdevs_list": [ 00:08:00.776 { 00:08:00.776 "name": "BaseBdev1", 00:08:00.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.776 "is_configured": false, 00:08:00.776 "data_offset": 0, 00:08:00.776 "data_size": 0 00:08:00.776 }, 
00:08:00.776 { 00:08:00.776 "name": "BaseBdev2", 00:08:00.776 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:00.776 "is_configured": false, 00:08:00.776 "data_offset": 0, 00:08:00.776 "data_size": 0 00:08:00.776 } 00:08:00.776 ] 00:08:00.776 }' 00:08:00.776 21:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:00.776 21:07:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:01.345 21:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:01.345 [2024-07-14 21:07:12.839731] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:01.345 [2024-07-14 21:07:12.839755] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x36f2f5e34500 name Existed_Raid, state configuring 00:08:01.345 21:07:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:01.603 [2024-07-14 21:07:13.103731] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:01.603 [2024-07-14 21:07:13.103771] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:01.603 [2024-07-14 21:07:13.103791] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:01.603 [2024-07-14 21:07:13.103798] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:01.603 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:01.860 [2024-07-14 21:07:13.364657] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:01.860 BaseBdev1 00:08:01.860 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:01.860 21:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:01.860 21:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:01.860 21:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:01.860 21:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:01.860 21:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:01.860 21:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:02.117 21:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:02.375 [ 00:08:02.375 { 00:08:02.375 "name": "BaseBdev1", 00:08:02.375 "aliases": [ 00:08:02.375 "0ac4577b-4225-11ef-aa83-81fbc7dfef58" 00:08:02.375 ], 00:08:02.375 "product_name": "Malloc disk", 00:08:02.375 "block_size": 512, 00:08:02.375 "num_blocks": 65536, 00:08:02.375 "uuid": "0ac4577b-4225-11ef-aa83-81fbc7dfef58", 00:08:02.375 "assigned_rate_limits": { 00:08:02.375 "rw_ios_per_sec": 0, 00:08:02.375 "rw_mbytes_per_sec": 
0, 00:08:02.375 "r_mbytes_per_sec": 0, 00:08:02.375 "w_mbytes_per_sec": 0 00:08:02.375 }, 00:08:02.375 "claimed": true, 00:08:02.375 "claim_type": "exclusive_write", 00:08:02.375 "zoned": false, 00:08:02.375 "supported_io_types": { 00:08:02.375 "read": true, 00:08:02.375 "write": true, 00:08:02.375 "unmap": true, 00:08:02.375 "flush": true, 00:08:02.375 "reset": true, 00:08:02.375 "nvme_admin": false, 00:08:02.375 "nvme_io": false, 00:08:02.375 "nvme_io_md": false, 00:08:02.375 "write_zeroes": true, 00:08:02.375 "zcopy": true, 00:08:02.375 "get_zone_info": false, 00:08:02.375 "zone_management": false, 00:08:02.375 "zone_append": false, 00:08:02.375 "compare": false, 00:08:02.375 "compare_and_write": false, 00:08:02.375 "abort": true, 00:08:02.375 "seek_hole": false, 00:08:02.375 "seek_data": false, 00:08:02.375 "copy": true, 00:08:02.375 "nvme_iov_md": false 00:08:02.375 }, 00:08:02.375 "memory_domains": [ 00:08:02.375 { 00:08:02.375 "dma_device_id": "system", 00:08:02.375 "dma_device_type": 1 00:08:02.375 }, 00:08:02.375 { 00:08:02.375 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:02.375 "dma_device_type": 2 00:08:02.375 } 00:08:02.375 ], 00:08:02.375 "driver_specific": {} 00:08:02.375 } 00:08:02.375 ] 00:08:02.375 21:07:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:02.375 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:02.375 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:02.375 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:02.376 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:02.376 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:02.376 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:02.376 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:02.376 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:02.376 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:02.376 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:02.376 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:02.376 21:07:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:02.635 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:02.635 "name": "Existed_Raid", 00:08:02.635 "uuid": "0a9caa51-4225-11ef-aa83-81fbc7dfef58", 00:08:02.635 "strip_size_kb": 64, 00:08:02.635 "state": "configuring", 00:08:02.635 "raid_level": "concat", 00:08:02.635 "superblock": true, 00:08:02.635 "num_base_bdevs": 2, 00:08:02.635 "num_base_bdevs_discovered": 1, 00:08:02.635 "num_base_bdevs_operational": 2, 00:08:02.635 "base_bdevs_list": [ 00:08:02.635 { 00:08:02.635 "name": "BaseBdev1", 00:08:02.635 "uuid": "0ac4577b-4225-11ef-aa83-81fbc7dfef58", 00:08:02.635 "is_configured": true, 00:08:02.635 "data_offset": 2048, 00:08:02.635 "data_size": 
63488 00:08:02.635 }, 00:08:02.635 { 00:08:02.635 "name": "BaseBdev2", 00:08:02.635 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:02.635 "is_configured": false, 00:08:02.635 "data_offset": 0, 00:08:02.635 "data_size": 0 00:08:02.635 } 00:08:02.635 ] 00:08:02.635 }' 00:08:02.635 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:02.635 21:07:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:02.893 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:03.152 [2024-07-14 21:07:14.651725] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:03.152 [2024-07-14 21:07:14.651775] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x36f2f5e34500 name Existed_Raid, state configuring 00:08:03.152 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:03.411 [2024-07-14 21:07:14.923774] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:03.412 [2024-07-14 21:07:14.924703] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:03.412 [2024-07-14 21:07:14.924752] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 2 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:03.412 21:07:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:03.671 21:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:03.671 "name": "Existed_Raid", 00:08:03.671 "uuid": "0bb26141-4225-11ef-aa83-81fbc7dfef58", 00:08:03.671 "strip_size_kb": 64, 00:08:03.671 
"state": "configuring", 00:08:03.671 "raid_level": "concat", 00:08:03.671 "superblock": true, 00:08:03.671 "num_base_bdevs": 2, 00:08:03.671 "num_base_bdevs_discovered": 1, 00:08:03.671 "num_base_bdevs_operational": 2, 00:08:03.671 "base_bdevs_list": [ 00:08:03.671 { 00:08:03.671 "name": "BaseBdev1", 00:08:03.671 "uuid": "0ac4577b-4225-11ef-aa83-81fbc7dfef58", 00:08:03.671 "is_configured": true, 00:08:03.671 "data_offset": 2048, 00:08:03.671 "data_size": 63488 00:08:03.671 }, 00:08:03.671 { 00:08:03.671 "name": "BaseBdev2", 00:08:03.671 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:03.671 "is_configured": false, 00:08:03.671 "data_offset": 0, 00:08:03.671 "data_size": 0 00:08:03.671 } 00:08:03.671 ] 00:08:03.671 }' 00:08:03.671 21:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:03.671 21:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:03.930 21:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:04.188 [2024-07-14 21:07:15.719921] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:04.188 [2024-07-14 21:07:15.720003] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x36f2f5e34a00 00:08:04.188 [2024-07-14 21:07:15.720010] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:04.188 [2024-07-14 21:07:15.720030] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x36f2f5e97e20 00:08:04.188 [2024-07-14 21:07:15.720079] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x36f2f5e34a00 00:08:04.188 [2024-07-14 21:07:15.720084] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x36f2f5e34a00 00:08:04.188 [2024-07-14 21:07:15.720105] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:04.188 BaseBdev2 00:08:04.188 21:07:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:04.188 21:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:04.188 21:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:04.188 21:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:04.188 21:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:04.188 21:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:04.188 21:07:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:04.756 21:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:04.756 [ 00:08:04.756 { 00:08:04.756 "name": "BaseBdev2", 00:08:04.757 "aliases": [ 00:08:04.757 "0c2bd684-4225-11ef-aa83-81fbc7dfef58" 00:08:04.757 ], 00:08:04.757 "product_name": "Malloc disk", 00:08:04.757 "block_size": 512, 00:08:04.757 "num_blocks": 65536, 00:08:04.757 "uuid": "0c2bd684-4225-11ef-aa83-81fbc7dfef58", 00:08:04.757 "assigned_rate_limits": { 00:08:04.757 "rw_ios_per_sec": 0, 
00:08:04.757 "rw_mbytes_per_sec": 0, 00:08:04.757 "r_mbytes_per_sec": 0, 00:08:04.757 "w_mbytes_per_sec": 0 00:08:04.757 }, 00:08:04.757 "claimed": true, 00:08:04.757 "claim_type": "exclusive_write", 00:08:04.757 "zoned": false, 00:08:04.757 "supported_io_types": { 00:08:04.757 "read": true, 00:08:04.757 "write": true, 00:08:04.757 "unmap": true, 00:08:04.757 "flush": true, 00:08:04.757 "reset": true, 00:08:04.757 "nvme_admin": false, 00:08:04.757 "nvme_io": false, 00:08:04.757 "nvme_io_md": false, 00:08:04.757 "write_zeroes": true, 00:08:04.757 "zcopy": true, 00:08:04.757 "get_zone_info": false, 00:08:04.757 "zone_management": false, 00:08:04.757 "zone_append": false, 00:08:04.757 "compare": false, 00:08:04.757 "compare_and_write": false, 00:08:04.757 "abort": true, 00:08:04.757 "seek_hole": false, 00:08:04.757 "seek_data": false, 00:08:04.757 "copy": true, 00:08:04.757 "nvme_iov_md": false 00:08:04.757 }, 00:08:04.757 "memory_domains": [ 00:08:04.757 { 00:08:04.757 "dma_device_id": "system", 00:08:04.757 "dma_device_type": 1 00:08:04.757 }, 00:08:04.757 { 00:08:04.757 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:04.757 "dma_device_type": 2 00:08:04.757 } 00:08:04.757 ], 00:08:04.757 "driver_specific": {} 00:08:04.757 } 00:08:04.757 ] 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 2 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:04.757 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:05.016 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:05.016 "name": "Existed_Raid", 00:08:05.016 "uuid": "0bb26141-4225-11ef-aa83-81fbc7dfef58", 00:08:05.016 "strip_size_kb": 64, 00:08:05.016 "state": "online", 00:08:05.016 "raid_level": "concat", 00:08:05.016 "superblock": true, 00:08:05.016 "num_base_bdevs": 2, 00:08:05.016 "num_base_bdevs_discovered": 2, 00:08:05.016 "num_base_bdevs_operational": 2, 
00:08:05.016 "base_bdevs_list": [ 00:08:05.016 { 00:08:05.016 "name": "BaseBdev1", 00:08:05.016 "uuid": "0ac4577b-4225-11ef-aa83-81fbc7dfef58", 00:08:05.016 "is_configured": true, 00:08:05.016 "data_offset": 2048, 00:08:05.016 "data_size": 63488 00:08:05.016 }, 00:08:05.016 { 00:08:05.016 "name": "BaseBdev2", 00:08:05.016 "uuid": "0c2bd684-4225-11ef-aa83-81fbc7dfef58", 00:08:05.016 "is_configured": true, 00:08:05.016 "data_offset": 2048, 00:08:05.016 "data_size": 63488 00:08:05.016 } 00:08:05.016 ] 00:08:05.016 }' 00:08:05.016 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:05.016 21:07:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:05.275 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:05.275 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:05.275 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:05.275 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:05.275 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:05.275 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:05.275 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:05.275 21:07:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:05.533 [2024-07-14 21:07:16.991777] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:05.533 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:05.533 "name": "Existed_Raid", 00:08:05.533 "aliases": [ 00:08:05.533 "0bb26141-4225-11ef-aa83-81fbc7dfef58" 00:08:05.533 ], 00:08:05.533 "product_name": "Raid Volume", 00:08:05.533 "block_size": 512, 00:08:05.533 "num_blocks": 126976, 00:08:05.533 "uuid": "0bb26141-4225-11ef-aa83-81fbc7dfef58", 00:08:05.533 "assigned_rate_limits": { 00:08:05.533 "rw_ios_per_sec": 0, 00:08:05.533 "rw_mbytes_per_sec": 0, 00:08:05.533 "r_mbytes_per_sec": 0, 00:08:05.533 "w_mbytes_per_sec": 0 00:08:05.533 }, 00:08:05.533 "claimed": false, 00:08:05.533 "zoned": false, 00:08:05.533 "supported_io_types": { 00:08:05.533 "read": true, 00:08:05.533 "write": true, 00:08:05.533 "unmap": true, 00:08:05.533 "flush": true, 00:08:05.533 "reset": true, 00:08:05.533 "nvme_admin": false, 00:08:05.533 "nvme_io": false, 00:08:05.533 "nvme_io_md": false, 00:08:05.533 "write_zeroes": true, 00:08:05.533 "zcopy": false, 00:08:05.533 "get_zone_info": false, 00:08:05.533 "zone_management": false, 00:08:05.533 "zone_append": false, 00:08:05.533 "compare": false, 00:08:05.533 "compare_and_write": false, 00:08:05.533 "abort": false, 00:08:05.533 "seek_hole": false, 00:08:05.533 "seek_data": false, 00:08:05.533 "copy": false, 00:08:05.533 "nvme_iov_md": false 00:08:05.533 }, 00:08:05.533 "memory_domains": [ 00:08:05.533 { 00:08:05.533 "dma_device_id": "system", 00:08:05.533 "dma_device_type": 1 00:08:05.533 }, 00:08:05.533 { 00:08:05.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.533 "dma_device_type": 2 00:08:05.533 }, 00:08:05.533 { 00:08:05.533 "dma_device_id": "system", 00:08:05.533 "dma_device_type": 1 00:08:05.533 
}, 00:08:05.533 { 00:08:05.533 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.533 "dma_device_type": 2 00:08:05.533 } 00:08:05.533 ], 00:08:05.533 "driver_specific": { 00:08:05.533 "raid": { 00:08:05.533 "uuid": "0bb26141-4225-11ef-aa83-81fbc7dfef58", 00:08:05.533 "strip_size_kb": 64, 00:08:05.533 "state": "online", 00:08:05.533 "raid_level": "concat", 00:08:05.533 "superblock": true, 00:08:05.533 "num_base_bdevs": 2, 00:08:05.533 "num_base_bdevs_discovered": 2, 00:08:05.533 "num_base_bdevs_operational": 2, 00:08:05.533 "base_bdevs_list": [ 00:08:05.533 { 00:08:05.533 "name": "BaseBdev1", 00:08:05.533 "uuid": "0ac4577b-4225-11ef-aa83-81fbc7dfef58", 00:08:05.533 "is_configured": true, 00:08:05.533 "data_offset": 2048, 00:08:05.533 "data_size": 63488 00:08:05.533 }, 00:08:05.533 { 00:08:05.533 "name": "BaseBdev2", 00:08:05.533 "uuid": "0c2bd684-4225-11ef-aa83-81fbc7dfef58", 00:08:05.533 "is_configured": true, 00:08:05.533 "data_offset": 2048, 00:08:05.533 "data_size": 63488 00:08:05.533 } 00:08:05.533 ] 00:08:05.533 } 00:08:05.533 } 00:08:05.533 }' 00:08:05.533 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:05.533 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:05.533 BaseBdev2' 00:08:05.533 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:05.533 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:05.533 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:05.791 "name": "BaseBdev1", 00:08:05.791 "aliases": [ 00:08:05.791 "0ac4577b-4225-11ef-aa83-81fbc7dfef58" 00:08:05.791 ], 00:08:05.791 "product_name": "Malloc disk", 00:08:05.791 "block_size": 512, 00:08:05.791 "num_blocks": 65536, 00:08:05.791 "uuid": "0ac4577b-4225-11ef-aa83-81fbc7dfef58", 00:08:05.791 "assigned_rate_limits": { 00:08:05.791 "rw_ios_per_sec": 0, 00:08:05.791 "rw_mbytes_per_sec": 0, 00:08:05.791 "r_mbytes_per_sec": 0, 00:08:05.791 "w_mbytes_per_sec": 0 00:08:05.791 }, 00:08:05.791 "claimed": true, 00:08:05.791 "claim_type": "exclusive_write", 00:08:05.791 "zoned": false, 00:08:05.791 "supported_io_types": { 00:08:05.791 "read": true, 00:08:05.791 "write": true, 00:08:05.791 "unmap": true, 00:08:05.791 "flush": true, 00:08:05.791 "reset": true, 00:08:05.791 "nvme_admin": false, 00:08:05.791 "nvme_io": false, 00:08:05.791 "nvme_io_md": false, 00:08:05.791 "write_zeroes": true, 00:08:05.791 "zcopy": true, 00:08:05.791 "get_zone_info": false, 00:08:05.791 "zone_management": false, 00:08:05.791 "zone_append": false, 00:08:05.791 "compare": false, 00:08:05.791 "compare_and_write": false, 00:08:05.791 "abort": true, 00:08:05.791 "seek_hole": false, 00:08:05.791 "seek_data": false, 00:08:05.791 "copy": true, 00:08:05.791 "nvme_iov_md": false 00:08:05.791 }, 00:08:05.791 "memory_domains": [ 00:08:05.791 { 00:08:05.791 "dma_device_id": "system", 00:08:05.791 "dma_device_type": 1 00:08:05.791 }, 00:08:05.791 { 00:08:05.791 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:05.791 "dma_device_type": 2 00:08:05.791 } 00:08:05.791 ], 00:08:05.791 "driver_specific": {} 00:08:05.791 }' 00:08:05.791 21:07:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:05.791 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:06.049 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:06.049 "name": "BaseBdev2", 00:08:06.049 "aliases": [ 00:08:06.049 "0c2bd684-4225-11ef-aa83-81fbc7dfef58" 00:08:06.049 ], 00:08:06.049 "product_name": "Malloc disk", 00:08:06.049 "block_size": 512, 00:08:06.049 "num_blocks": 65536, 00:08:06.049 "uuid": "0c2bd684-4225-11ef-aa83-81fbc7dfef58", 00:08:06.049 "assigned_rate_limits": { 00:08:06.049 "rw_ios_per_sec": 0, 00:08:06.049 "rw_mbytes_per_sec": 0, 00:08:06.049 "r_mbytes_per_sec": 0, 00:08:06.049 "w_mbytes_per_sec": 0 00:08:06.049 }, 00:08:06.049 "claimed": true, 00:08:06.049 "claim_type": "exclusive_write", 00:08:06.049 "zoned": false, 00:08:06.049 "supported_io_types": { 00:08:06.049 "read": true, 00:08:06.049 "write": true, 00:08:06.049 "unmap": true, 00:08:06.049 "flush": true, 00:08:06.049 "reset": true, 00:08:06.049 "nvme_admin": false, 00:08:06.049 "nvme_io": false, 00:08:06.049 "nvme_io_md": false, 00:08:06.049 "write_zeroes": true, 00:08:06.049 "zcopy": true, 00:08:06.049 "get_zone_info": false, 00:08:06.049 "zone_management": false, 00:08:06.049 "zone_append": false, 00:08:06.049 "compare": false, 00:08:06.049 "compare_and_write": false, 00:08:06.049 "abort": true, 00:08:06.049 "seek_hole": false, 00:08:06.049 "seek_data": false, 00:08:06.049 "copy": true, 00:08:06.049 "nvme_iov_md": false 00:08:06.050 }, 00:08:06.050 "memory_domains": [ 00:08:06.050 { 00:08:06.050 "dma_device_id": "system", 00:08:06.050 "dma_device_type": 1 00:08:06.050 }, 00:08:06.050 { 00:08:06.050 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:06.050 "dma_device_type": 2 00:08:06.050 } 00:08:06.050 ], 00:08:06.050 "driver_specific": {} 00:08:06.050 }' 00:08:06.050 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:06.050 21:07:17 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:06.050 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:06.050 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:06.050 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:06.308 [2024-07-14 21:07:17.827760] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:06.308 [2024-07-14 21:07:17.827793] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:06.308 [2024-07-14 21:07:17.827810] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 1 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
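The @205-@208 checks just above assert each base bdev's geometry (512-byte blocks, no metadata size, no interleave, no DIF type), and the bdev_malloc_delete BaseBdev1 that follows removes a member from a level with no redundancy — has_redundancy returns 1 for concat — so verify_raid_bdev_state now expects "offline", which the dump below confirms. A rough single-shot equivalent, as a sketch: the RPC names and the select() filter are verbatim from the log, while the loop, the jq object filter, and the .state extraction are conveniences introduced here.

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock
    # the geometry fields the test checks one jq filter at a time
    for b in BaseBdev1 BaseBdev2; do
        $SPDK/scripts/rpc.py -s $SOCK bdev_get_bdevs -b "$b" \
            | jq '.[] | {block_size, md_size, md_interleave, dif_type}'
    done
    # removing one member of a non-redundant array takes it offline
    $SPDK/scripts/rpc.py -s $SOCK bdev_malloc_delete BaseBdev1
    $SPDK/scripts/rpc.py -s $SOCK bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid").state'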
00:08:06.308 21:07:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:06.597 21:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:06.597 "name": "Existed_Raid", 00:08:06.597 "uuid": "0bb26141-4225-11ef-aa83-81fbc7dfef58", 00:08:06.597 "strip_size_kb": 64, 00:08:06.597 "state": "offline", 00:08:06.597 "raid_level": "concat", 00:08:06.597 "superblock": true, 00:08:06.597 "num_base_bdevs": 2, 00:08:06.597 "num_base_bdevs_discovered": 1, 00:08:06.597 "num_base_bdevs_operational": 1, 00:08:06.597 "base_bdevs_list": [ 00:08:06.597 { 00:08:06.597 "name": null, 00:08:06.597 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:06.597 "is_configured": false, 00:08:06.597 "data_offset": 2048, 00:08:06.597 "data_size": 63488 00:08:06.597 }, 00:08:06.597 { 00:08:06.597 "name": "BaseBdev2", 00:08:06.597 "uuid": "0c2bd684-4225-11ef-aa83-81fbc7dfef58", 00:08:06.597 "is_configured": true, 00:08:06.597 "data_offset": 2048, 00:08:06.597 "data_size": 63488 00:08:06.597 } 00:08:06.597 ] 00:08:06.597 }' 00:08:06.597 21:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:06.597 21:07:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.164 21:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:07.164 21:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:07.164 21:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:07.164 21:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:07.423 21:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:07.423 21:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:07.423 21:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:07.681 [2024-07-14 21:07:18.976268] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:07.681 [2024-07-14 21:07:18.976317] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x36f2f5e34a00 name Existed_Raid, state offline 00:08:07.681 21:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:07.681 21:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:07.681 21:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:07.681 21:07:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 49946 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@948 -- # '[' -z 49946 ']' 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 49946 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 49946 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 49946' 00:08:07.939 killing process with pid 49946 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 49946 00:08:07.939 [2024-07-14 21:07:19.303873] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:07.939 [2024-07-14 21:07:19.303906] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 49946 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:08:07.939 00:08:07.939 real 0m8.833s 00:08:07.939 user 0m15.296s 00:08:07.939 sys 0m1.626s 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:07.939 21:07:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:07.939 ************************************ 00:08:07.939 END TEST raid_state_function_test_sb 00:08:07.939 ************************************ 00:08:08.198 21:07:19 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:08.198 21:07:19 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 2 00:08:08.198 21:07:19 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:08.198 21:07:19 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:08.198 21:07:19 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:08.198 ************************************ 00:08:08.198 START TEST raid_superblock_test 00:08:08.198 ************************************ 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 2 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=50220 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 50220 /var/tmp/spdk-raid.sock 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 50220 ']' 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:08.198 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:08.198 21:07:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:08.198 [2024-07-14 21:07:19.536380] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:08.198 [2024-07-14 21:07:19.536611] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:08.765 EAL: TSC is not safe to use in SMP mode 00:08:08.765 EAL: TSC is not invariant 00:08:08.765 [2024-07-14 21:07:20.079669] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:08.765 [2024-07-14 21:07:20.159135] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
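Where the state-function tests built the array directly on malloc bdevs, the records that follow show raid_superblock_test inserting a passthru layer: each malloc bdev is wrapped by bdev_passthru_create with a pinned UUID, and the -s (superblock) array is assembled on pt1/pt2 — presumably so the on-disk superblock records stable, known base-bdev UUIDs. Condensed into a sketch, with every command and argument verbatim from the log and only the $SPDK/$SOCK shorthand introduced here:

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-raid.sock
    # malloc -> passthru with fixed UUID -> raid with superblock (-s)
    $SPDK/scripts/rpc.py -s $SOCK bdev_malloc_create 32 512 -b malloc1
    $SPDK/scripts/rpc.py -s $SOCK bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001
    $SPDK/scripts/rpc.py -s $SOCK bdev_malloc_create 32 512 -b malloc2
    $SPDK/scripts/rpc.py -s $SOCK bdev_passthru_create -b malloc2 -p pt2 \
        -u 00000000-0000-0000-0000-000000000002
    $SPDK/scripts/rpc.py -s $SOCK bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s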
00:08:08.765 [2024-07-14 21:07:20.161551] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.765 [2024-07-14 21:07:20.162447] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:08.765 [2024-07-14 21:07:20.162476] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:09.025 21:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:09.025 21:07:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:08:09.025 21:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:08:09.025 21:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:09.025 21:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:08:09.025 21:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:08:09.025 21:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:09.025 21:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:09.025 21:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:09.025 21:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:09.025 21:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:09.284 malloc1 00:08:09.284 21:07:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:09.543 [2024-07-14 21:07:21.033461] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:09.543 [2024-07-14 21:07:21.033531] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:09.543 [2024-07-14 21:07:21.033558] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x184c54c34780 00:08:09.543 [2024-07-14 21:07:21.033566] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:09.543 [2024-07-14 21:07:21.034645] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:09.543 [2024-07-14 21:07:21.034669] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:09.543 pt1 00:08:09.543 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:09.543 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:09.543 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:08:09.543 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:08:09.543 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:09.544 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:09.544 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:09.544 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:09.544 21:07:21 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:09.803 malloc2 00:08:09.803 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:10.061 [2024-07-14 21:07:21.505501] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:10.062 [2024-07-14 21:07:21.505568] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:10.062 [2024-07-14 21:07:21.505596] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x184c54c34c80 00:08:10.062 [2024-07-14 21:07:21.505603] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:10.062 [2024-07-14 21:07:21.506397] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:10.062 [2024-07-14 21:07:21.506423] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:10.062 pt2 00:08:10.062 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:10.062 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:10.062 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2' -n raid_bdev1 -s 00:08:10.320 [2024-07-14 21:07:21.781529] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:10.320 [2024-07-14 21:07:21.782109] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:10.320 [2024-07-14 21:07:21.782165] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x184c54c34f00 00:08:10.320 [2024-07-14 21:07:21.782171] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:10.320 [2024-07-14 21:07:21.782204] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x184c54c97e20 00:08:10.320 [2024-07-14 21:07:21.782281] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x184c54c34f00 00:08:10.320 [2024-07-14 21:07:21.782285] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x184c54c34f00 00:08:10.320 [2024-07-14 21:07:21.782314] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:10.320 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:10.320 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:10.320 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:10.320 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:10.320 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:10.320 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:10.320 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:10.320 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:10.320 21:07:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:10.320 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:10.320 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:10.320 21:07:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:10.576 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:10.576 "name": "raid_bdev1", 00:08:10.576 "uuid": "0fc8ca8a-4225-11ef-aa83-81fbc7dfef58", 00:08:10.576 "strip_size_kb": 64, 00:08:10.576 "state": "online", 00:08:10.576 "raid_level": "concat", 00:08:10.576 "superblock": true, 00:08:10.576 "num_base_bdevs": 2, 00:08:10.576 "num_base_bdevs_discovered": 2, 00:08:10.576 "num_base_bdevs_operational": 2, 00:08:10.576 "base_bdevs_list": [ 00:08:10.576 { 00:08:10.576 "name": "pt1", 00:08:10.576 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:10.576 "is_configured": true, 00:08:10.576 "data_offset": 2048, 00:08:10.576 "data_size": 63488 00:08:10.576 }, 00:08:10.576 { 00:08:10.576 "name": "pt2", 00:08:10.576 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:10.576 "is_configured": true, 00:08:10.576 "data_offset": 2048, 00:08:10.576 "data_size": 63488 00:08:10.576 } 00:08:10.576 ] 00:08:10.576 }' 00:08:10.576 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:10.576 21:07:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:11.140 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:08:11.140 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:11.140 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:11.140 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:11.140 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:11.140 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:11.140 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:11.140 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:11.397 [2024-07-14 21:07:22.701527] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:11.398 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:11.398 "name": "raid_bdev1", 00:08:11.398 "aliases": [ 00:08:11.398 "0fc8ca8a-4225-11ef-aa83-81fbc7dfef58" 00:08:11.398 ], 00:08:11.398 "product_name": "Raid Volume", 00:08:11.398 "block_size": 512, 00:08:11.398 "num_blocks": 126976, 00:08:11.398 "uuid": "0fc8ca8a-4225-11ef-aa83-81fbc7dfef58", 00:08:11.398 "assigned_rate_limits": { 00:08:11.398 "rw_ios_per_sec": 0, 00:08:11.398 "rw_mbytes_per_sec": 0, 00:08:11.398 "r_mbytes_per_sec": 0, 00:08:11.398 "w_mbytes_per_sec": 0 00:08:11.398 }, 00:08:11.398 "claimed": false, 00:08:11.398 "zoned": false, 00:08:11.398 "supported_io_types": { 00:08:11.398 "read": true, 00:08:11.398 "write": true, 00:08:11.398 "unmap": true, 00:08:11.398 "flush": true, 00:08:11.398 "reset": true, 00:08:11.398 "nvme_admin": false, 00:08:11.398 "nvme_io": 
false, 00:08:11.398 "nvme_io_md": false, 00:08:11.398 "write_zeroes": true, 00:08:11.398 "zcopy": false, 00:08:11.398 "get_zone_info": false, 00:08:11.398 "zone_management": false, 00:08:11.398 "zone_append": false, 00:08:11.398 "compare": false, 00:08:11.398 "compare_and_write": false, 00:08:11.398 "abort": false, 00:08:11.398 "seek_hole": false, 00:08:11.398 "seek_data": false, 00:08:11.398 "copy": false, 00:08:11.398 "nvme_iov_md": false 00:08:11.398 }, 00:08:11.398 "memory_domains": [ 00:08:11.398 { 00:08:11.398 "dma_device_id": "system", 00:08:11.398 "dma_device_type": 1 00:08:11.398 }, 00:08:11.398 { 00:08:11.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.398 "dma_device_type": 2 00:08:11.398 }, 00:08:11.398 { 00:08:11.398 "dma_device_id": "system", 00:08:11.398 "dma_device_type": 1 00:08:11.398 }, 00:08:11.398 { 00:08:11.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.398 "dma_device_type": 2 00:08:11.398 } 00:08:11.398 ], 00:08:11.398 "driver_specific": { 00:08:11.398 "raid": { 00:08:11.398 "uuid": "0fc8ca8a-4225-11ef-aa83-81fbc7dfef58", 00:08:11.398 "strip_size_kb": 64, 00:08:11.398 "state": "online", 00:08:11.398 "raid_level": "concat", 00:08:11.398 "superblock": true, 00:08:11.398 "num_base_bdevs": 2, 00:08:11.398 "num_base_bdevs_discovered": 2, 00:08:11.398 "num_base_bdevs_operational": 2, 00:08:11.398 "base_bdevs_list": [ 00:08:11.398 { 00:08:11.398 "name": "pt1", 00:08:11.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.398 "is_configured": true, 00:08:11.398 "data_offset": 2048, 00:08:11.398 "data_size": 63488 00:08:11.398 }, 00:08:11.398 { 00:08:11.398 "name": "pt2", 00:08:11.398 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.398 "is_configured": true, 00:08:11.398 "data_offset": 2048, 00:08:11.398 "data_size": 63488 00:08:11.398 } 00:08:11.398 ] 00:08:11.398 } 00:08:11.398 } 00:08:11.398 }' 00:08:11.398 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:11.398 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:11.398 pt2' 00:08:11.398 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:11.398 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:11.398 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:11.398 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:11.398 "name": "pt1", 00:08:11.398 "aliases": [ 00:08:11.398 "00000000-0000-0000-0000-000000000001" 00:08:11.398 ], 00:08:11.398 "product_name": "passthru", 00:08:11.398 "block_size": 512, 00:08:11.398 "num_blocks": 65536, 00:08:11.398 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:11.398 "assigned_rate_limits": { 00:08:11.398 "rw_ios_per_sec": 0, 00:08:11.398 "rw_mbytes_per_sec": 0, 00:08:11.398 "r_mbytes_per_sec": 0, 00:08:11.398 "w_mbytes_per_sec": 0 00:08:11.398 }, 00:08:11.398 "claimed": true, 00:08:11.398 "claim_type": "exclusive_write", 00:08:11.398 "zoned": false, 00:08:11.398 "supported_io_types": { 00:08:11.398 "read": true, 00:08:11.398 "write": true, 00:08:11.398 "unmap": true, 00:08:11.398 "flush": true, 00:08:11.398 "reset": true, 00:08:11.398 "nvme_admin": false, 00:08:11.398 "nvme_io": false, 00:08:11.398 "nvme_io_md": false, 00:08:11.398 "write_zeroes": true, 
00:08:11.398 "zcopy": true, 00:08:11.398 "get_zone_info": false, 00:08:11.398 "zone_management": false, 00:08:11.398 "zone_append": false, 00:08:11.398 "compare": false, 00:08:11.398 "compare_and_write": false, 00:08:11.398 "abort": true, 00:08:11.398 "seek_hole": false, 00:08:11.398 "seek_data": false, 00:08:11.398 "copy": true, 00:08:11.398 "nvme_iov_md": false 00:08:11.398 }, 00:08:11.398 "memory_domains": [ 00:08:11.398 { 00:08:11.398 "dma_device_id": "system", 00:08:11.398 "dma_device_type": 1 00:08:11.398 }, 00:08:11.398 { 00:08:11.398 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.398 "dma_device_type": 2 00:08:11.398 } 00:08:11.398 ], 00:08:11.398 "driver_specific": { 00:08:11.398 "passthru": { 00:08:11.398 "name": "pt1", 00:08:11.398 "base_bdev_name": "malloc1" 00:08:11.398 } 00:08:11.398 } 00:08:11.398 }' 00:08:11.398 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:11.398 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:11.398 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:11.398 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:11.656 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:11.656 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:11.656 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:11.656 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:11.656 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:11.656 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:11.656 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:11.656 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:11.656 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:11.656 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:11.656 21:07:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:11.914 "name": "pt2", 00:08:11.914 "aliases": [ 00:08:11.914 "00000000-0000-0000-0000-000000000002" 00:08:11.914 ], 00:08:11.914 "product_name": "passthru", 00:08:11.914 "block_size": 512, 00:08:11.914 "num_blocks": 65536, 00:08:11.914 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:11.914 "assigned_rate_limits": { 00:08:11.914 "rw_ios_per_sec": 0, 00:08:11.914 "rw_mbytes_per_sec": 0, 00:08:11.914 "r_mbytes_per_sec": 0, 00:08:11.914 "w_mbytes_per_sec": 0 00:08:11.914 }, 00:08:11.914 "claimed": true, 00:08:11.914 "claim_type": "exclusive_write", 00:08:11.914 "zoned": false, 00:08:11.914 "supported_io_types": { 00:08:11.914 "read": true, 00:08:11.914 "write": true, 00:08:11.914 "unmap": true, 00:08:11.914 "flush": true, 00:08:11.914 "reset": true, 00:08:11.914 "nvme_admin": false, 00:08:11.914 "nvme_io": false, 00:08:11.914 "nvme_io_md": false, 00:08:11.914 "write_zeroes": true, 00:08:11.914 "zcopy": true, 00:08:11.914 "get_zone_info": false, 00:08:11.914 "zone_management": false, 00:08:11.914 "zone_append": false, 00:08:11.914 
"compare": false, 00:08:11.914 "compare_and_write": false, 00:08:11.914 "abort": true, 00:08:11.914 "seek_hole": false, 00:08:11.914 "seek_data": false, 00:08:11.914 "copy": true, 00:08:11.914 "nvme_iov_md": false 00:08:11.914 }, 00:08:11.914 "memory_domains": [ 00:08:11.914 { 00:08:11.914 "dma_device_id": "system", 00:08:11.914 "dma_device_type": 1 00:08:11.914 }, 00:08:11.914 { 00:08:11.914 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:11.914 "dma_device_type": 2 00:08:11.914 } 00:08:11.914 ], 00:08:11.914 "driver_specific": { 00:08:11.914 "passthru": { 00:08:11.914 "name": "pt2", 00:08:11.914 "base_bdev_name": "malloc2" 00:08:11.914 } 00:08:11.914 } 00:08:11.914 }' 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:11.914 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:08:12.171 [2024-07-14 21:07:23.513565] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:12.171 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=0fc8ca8a-4225-11ef-aa83-81fbc7dfef58 00:08:12.171 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 0fc8ca8a-4225-11ef-aa83-81fbc7dfef58 ']' 00:08:12.171 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:12.429 [2024-07-14 21:07:23.753478] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:12.429 [2024-07-14 21:07:23.753507] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:12.429 [2024-07-14 21:07:23.753541] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:12.429 [2024-07-14 21:07:23.753555] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:12.429 [2024-07-14 21:07:23.753559] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x184c54c34f00 name raid_bdev1, state offline 00:08:12.429 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:08:12.429 21:07:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:08:12.688 21:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:08:12.688 21:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:08:12.688 21:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:12.688 21:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:12.688 21:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:12.688 21:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:12.947 21:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:12.947 21:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:13.206 21:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:08:13.206 21:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:13.206 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:08:13.206 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:13.206 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.206 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.206 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.206 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.206 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.206 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:13.206 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:13.206 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:13.206 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1 00:08:13.465 [2024-07-14 21:07:24.913544] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:13.465 [2024-07-14 21:07:24.914299] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:13.465 [2024-07-14 21:07:24.914330] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc1 00:08:13.465 [2024-07-14 21:07:24.914380] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:13.465 [2024-07-14 21:07:24.914392] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:13.465 [2024-07-14 21:07:24.914396] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x184c54c34c80 name raid_bdev1, state configuring 00:08:13.465 request: 00:08:13.465 { 00:08:13.465 "name": "raid_bdev1", 00:08:13.465 "raid_level": "concat", 00:08:13.465 "base_bdevs": [ 00:08:13.465 "malloc1", 00:08:13.465 "malloc2" 00:08:13.465 ], 00:08:13.465 "strip_size_kb": 64, 00:08:13.465 "superblock": false, 00:08:13.465 "method": "bdev_raid_create", 00:08:13.465 "req_id": 1 00:08:13.465 } 00:08:13.465 Got JSON-RPC error response 00:08:13.465 response: 00:08:13.465 { 00:08:13.465 "code": -17, 00:08:13.465 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:13.465 } 00:08:13.465 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:08:13.465 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:13.465 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:13.465 21:07:24 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:13.465 21:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:13.465 21:07:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:08:13.724 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:08:13.724 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:08:13.724 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:13.983 [2024-07-14 21:07:25.369523] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:13.983 [2024-07-14 21:07:25.369658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:13.983 [2024-07-14 21:07:25.369671] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x184c54c34780 00:08:13.983 [2024-07-14 21:07:25.369679] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:13.983 [2024-07-14 21:07:25.370635] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:13.983 [2024-07-14 21:07:25.370663] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:13.983 [2024-07-14 21:07:25.370694] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:13.983 [2024-07-14 21:07:25.370708] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:13.983 pt1 00:08:13.983 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 2 00:08:13.983 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:13.983 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:13.983 21:07:25 bdev_raid.raid_superblock_test 
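[annotation] The failed call above is the superblock negative test: with pt1 and pt2 deleted, malloc1 and malloc2 still carry the superblock that names pt1/pt2 as the raid's members, so creating raid_bdev1 directly on the malloc bdevs is rejected with JSON-RPC error -17 ("Failed to create RAID bdev raid_bdev1: File exists"). The rejected invocation, as issued by the script:
# Expected to fail while the stale superblock is still present on the bases
rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2' -n raid_bdev1
Once pt1 is re-registered, the examine path finds the superblock ("raid superblock found on bdev pt1"), claims the bdev, and the raid re-enters the configuring state with one of its two bases discovered, as the state dump that follows shows.
[end annotation]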
-- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:13.983 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:13.983 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:13.983 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:13.983 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:13.983 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:13.983 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:13.983 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:13.983 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:14.242 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:14.242 "name": "raid_bdev1", 00:08:14.242 "uuid": "0fc8ca8a-4225-11ef-aa83-81fbc7dfef58", 00:08:14.242 "strip_size_kb": 64, 00:08:14.242 "state": "configuring", 00:08:14.242 "raid_level": "concat", 00:08:14.242 "superblock": true, 00:08:14.242 "num_base_bdevs": 2, 00:08:14.242 "num_base_bdevs_discovered": 1, 00:08:14.242 "num_base_bdevs_operational": 2, 00:08:14.242 "base_bdevs_list": [ 00:08:14.242 { 00:08:14.242 "name": "pt1", 00:08:14.242 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:14.242 "is_configured": true, 00:08:14.242 "data_offset": 2048, 00:08:14.242 "data_size": 63488 00:08:14.242 }, 00:08:14.242 { 00:08:14.242 "name": null, 00:08:14.242 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:14.242 "is_configured": false, 00:08:14.242 "data_offset": 2048, 00:08:14.242 "data_size": 63488 00:08:14.242 } 00:08:14.242 ] 00:08:14.242 }' 00:08:14.242 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:14.242 21:07:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:14.500 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:08:14.500 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:08:14.500 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:14.500 21:07:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:14.791 [2024-07-14 21:07:26.173505] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:14.791 [2024-07-14 21:07:26.173591] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:14.791 [2024-07-14 21:07:26.173609] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x184c54c34f00 00:08:14.791 [2024-07-14 21:07:26.173617] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:14.791 [2024-07-14 21:07:26.173769] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:14.791 [2024-07-14 21:07:26.173782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:14.791 [2024-07-14 21:07:26.173835] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 
00:08:14.791 [2024-07-14 21:07:26.173844] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:14.791 [2024-07-14 21:07:26.173876] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x184c54c35180 00:08:14.791 [2024-07-14 21:07:26.173881] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:14.791 [2024-07-14 21:07:26.173899] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x184c54c97e20 00:08:14.791 [2024-07-14 21:07:26.173979] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x184c54c35180 00:08:14.791 [2024-07-14 21:07:26.173985] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x184c54c35180 00:08:14.791 [2024-07-14 21:07:26.174007] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:14.791 pt2 00:08:14.791 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:08:14.791 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:14.791 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:14.791 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:14.791 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:14.791 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:14.791 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:14.791 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:14.791 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:14.791 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:14.792 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:14.792 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:14.792 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:14.792 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:15.050 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:15.050 "name": "raid_bdev1", 00:08:15.050 "uuid": "0fc8ca8a-4225-11ef-aa83-81fbc7dfef58", 00:08:15.050 "strip_size_kb": 64, 00:08:15.050 "state": "online", 00:08:15.050 "raid_level": "concat", 00:08:15.050 "superblock": true, 00:08:15.050 "num_base_bdevs": 2, 00:08:15.050 "num_base_bdevs_discovered": 2, 00:08:15.050 "num_base_bdevs_operational": 2, 00:08:15.050 "base_bdevs_list": [ 00:08:15.050 { 00:08:15.050 "name": "pt1", 00:08:15.050 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.050 "is_configured": true, 00:08:15.050 "data_offset": 2048, 00:08:15.050 "data_size": 63488 00:08:15.050 }, 00:08:15.050 { 00:08:15.050 "name": "pt2", 00:08:15.050 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.050 "is_configured": true, 00:08:15.050 "data_offset": 2048, 00:08:15.050 "data_size": 63488 00:08:15.050 } 00:08:15.050 ] 00:08:15.050 }' 00:08:15.050 21:07:26 bdev_raid.raid_superblock_test -- 
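[annotation] Re-registering pt2 completes the set: examine claims it and raid_bdev1 is reassembled purely from the superblocks, with no second bdev_raid_create (note the fresh object address 0x184c54c35180, distinct from the deleted 0x184c54c34f00). A hedged way to watch the reassembly from the RPC side, using the same get_bdevs/jq filter the verify helper uses:
# Should print "configuring" after pt1 alone, "online" once pt2 is back
rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'
[end annotation]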
bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:15.050 21:07:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:15.309 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:08:15.309 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:15.309 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:15.309 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:15.309 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:15.309 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:15.309 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:15.309 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:15.568 [2024-07-14 21:07:26.957530] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:15.568 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:15.568 "name": "raid_bdev1", 00:08:15.568 "aliases": [ 00:08:15.568 "0fc8ca8a-4225-11ef-aa83-81fbc7dfef58" 00:08:15.568 ], 00:08:15.568 "product_name": "Raid Volume", 00:08:15.568 "block_size": 512, 00:08:15.568 "num_blocks": 126976, 00:08:15.568 "uuid": "0fc8ca8a-4225-11ef-aa83-81fbc7dfef58", 00:08:15.568 "assigned_rate_limits": { 00:08:15.568 "rw_ios_per_sec": 0, 00:08:15.568 "rw_mbytes_per_sec": 0, 00:08:15.568 "r_mbytes_per_sec": 0, 00:08:15.568 "w_mbytes_per_sec": 0 00:08:15.568 }, 00:08:15.568 "claimed": false, 00:08:15.568 "zoned": false, 00:08:15.568 "supported_io_types": { 00:08:15.568 "read": true, 00:08:15.568 "write": true, 00:08:15.568 "unmap": true, 00:08:15.568 "flush": true, 00:08:15.568 "reset": true, 00:08:15.568 "nvme_admin": false, 00:08:15.568 "nvme_io": false, 00:08:15.568 "nvme_io_md": false, 00:08:15.568 "write_zeroes": true, 00:08:15.568 "zcopy": false, 00:08:15.568 "get_zone_info": false, 00:08:15.568 "zone_management": false, 00:08:15.568 "zone_append": false, 00:08:15.568 "compare": false, 00:08:15.568 "compare_and_write": false, 00:08:15.568 "abort": false, 00:08:15.568 "seek_hole": false, 00:08:15.568 "seek_data": false, 00:08:15.568 "copy": false, 00:08:15.568 "nvme_iov_md": false 00:08:15.568 }, 00:08:15.568 "memory_domains": [ 00:08:15.568 { 00:08:15.568 "dma_device_id": "system", 00:08:15.568 "dma_device_type": 1 00:08:15.568 }, 00:08:15.568 { 00:08:15.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.568 "dma_device_type": 2 00:08:15.568 }, 00:08:15.568 { 00:08:15.568 "dma_device_id": "system", 00:08:15.568 "dma_device_type": 1 00:08:15.568 }, 00:08:15.568 { 00:08:15.568 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.568 "dma_device_type": 2 00:08:15.568 } 00:08:15.568 ], 00:08:15.568 "driver_specific": { 00:08:15.568 "raid": { 00:08:15.568 "uuid": "0fc8ca8a-4225-11ef-aa83-81fbc7dfef58", 00:08:15.568 "strip_size_kb": 64, 00:08:15.568 "state": "online", 00:08:15.568 "raid_level": "concat", 00:08:15.568 "superblock": true, 00:08:15.568 "num_base_bdevs": 2, 00:08:15.568 "num_base_bdevs_discovered": 2, 00:08:15.568 "num_base_bdevs_operational": 2, 00:08:15.568 "base_bdevs_list": [ 00:08:15.568 { 00:08:15.568 "name": "pt1", 00:08:15.568 "uuid": "00000000-0000-0000-0000-000000000001", 
00:08:15.568 "is_configured": true, 00:08:15.568 "data_offset": 2048, 00:08:15.568 "data_size": 63488 00:08:15.568 }, 00:08:15.568 { 00:08:15.568 "name": "pt2", 00:08:15.568 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:15.568 "is_configured": true, 00:08:15.568 "data_offset": 2048, 00:08:15.568 "data_size": 63488 00:08:15.568 } 00:08:15.568 ] 00:08:15.568 } 00:08:15.568 } 00:08:15.568 }' 00:08:15.568 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:15.568 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:15.568 pt2' 00:08:15.568 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:15.568 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:15.568 21:07:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:15.827 "name": "pt1", 00:08:15.827 "aliases": [ 00:08:15.827 "00000000-0000-0000-0000-000000000001" 00:08:15.827 ], 00:08:15.827 "product_name": "passthru", 00:08:15.827 "block_size": 512, 00:08:15.827 "num_blocks": 65536, 00:08:15.827 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:15.827 "assigned_rate_limits": { 00:08:15.827 "rw_ios_per_sec": 0, 00:08:15.827 "rw_mbytes_per_sec": 0, 00:08:15.827 "r_mbytes_per_sec": 0, 00:08:15.827 "w_mbytes_per_sec": 0 00:08:15.827 }, 00:08:15.827 "claimed": true, 00:08:15.827 "claim_type": "exclusive_write", 00:08:15.827 "zoned": false, 00:08:15.827 "supported_io_types": { 00:08:15.827 "read": true, 00:08:15.827 "write": true, 00:08:15.827 "unmap": true, 00:08:15.827 "flush": true, 00:08:15.827 "reset": true, 00:08:15.827 "nvme_admin": false, 00:08:15.827 "nvme_io": false, 00:08:15.827 "nvme_io_md": false, 00:08:15.827 "write_zeroes": true, 00:08:15.827 "zcopy": true, 00:08:15.827 "get_zone_info": false, 00:08:15.827 "zone_management": false, 00:08:15.827 "zone_append": false, 00:08:15.827 "compare": false, 00:08:15.827 "compare_and_write": false, 00:08:15.827 "abort": true, 00:08:15.827 "seek_hole": false, 00:08:15.827 "seek_data": false, 00:08:15.827 "copy": true, 00:08:15.827 "nvme_iov_md": false 00:08:15.827 }, 00:08:15.827 "memory_domains": [ 00:08:15.827 { 00:08:15.827 "dma_device_id": "system", 00:08:15.827 "dma_device_type": 1 00:08:15.827 }, 00:08:15.827 { 00:08:15.827 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:15.827 "dma_device_type": 2 00:08:15.827 } 00:08:15.827 ], 00:08:15.827 "driver_specific": { 00:08:15.827 "passthru": { 00:08:15.827 "name": "pt1", 00:08:15.827 "base_bdev_name": "malloc1" 00:08:15.827 } 00:08:15.827 } 00:08:15.827 }' 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:15.827 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:16.086 "name": "pt2", 00:08:16.086 "aliases": [ 00:08:16.086 "00000000-0000-0000-0000-000000000002" 00:08:16.086 ], 00:08:16.086 "product_name": "passthru", 00:08:16.086 "block_size": 512, 00:08:16.086 "num_blocks": 65536, 00:08:16.086 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:16.086 "assigned_rate_limits": { 00:08:16.086 "rw_ios_per_sec": 0, 00:08:16.086 "rw_mbytes_per_sec": 0, 00:08:16.086 "r_mbytes_per_sec": 0, 00:08:16.086 "w_mbytes_per_sec": 0 00:08:16.086 }, 00:08:16.086 "claimed": true, 00:08:16.086 "claim_type": "exclusive_write", 00:08:16.086 "zoned": false, 00:08:16.086 "supported_io_types": { 00:08:16.086 "read": true, 00:08:16.086 "write": true, 00:08:16.086 "unmap": true, 00:08:16.086 "flush": true, 00:08:16.086 "reset": true, 00:08:16.086 "nvme_admin": false, 00:08:16.086 "nvme_io": false, 00:08:16.086 "nvme_io_md": false, 00:08:16.086 "write_zeroes": true, 00:08:16.086 "zcopy": true, 00:08:16.086 "get_zone_info": false, 00:08:16.086 "zone_management": false, 00:08:16.086 "zone_append": false, 00:08:16.086 "compare": false, 00:08:16.086 "compare_and_write": false, 00:08:16.086 "abort": true, 00:08:16.086 "seek_hole": false, 00:08:16.086 "seek_data": false, 00:08:16.086 "copy": true, 00:08:16.086 "nvme_iov_md": false 00:08:16.086 }, 00:08:16.086 "memory_domains": [ 00:08:16.086 { 00:08:16.086 "dma_device_id": "system", 00:08:16.086 "dma_device_type": 1 00:08:16.086 }, 00:08:16.086 { 00:08:16.086 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:16.086 "dma_device_type": 2 00:08:16.086 } 00:08:16.086 ], 00:08:16.086 "driver_specific": { 00:08:16.086 "passthru": { 00:08:16.086 "name": "pt2", 00:08:16.086 "base_bdev_name": "malloc2" 00:08:16.086 } 00:08:16.086 } 00:08:16.086 }' 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:16.086 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:08:16.345 [2024-07-14 21:07:27.801536] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 0fc8ca8a-4225-11ef-aa83-81fbc7dfef58 '!=' 0fc8ca8a-4225-11ef-aa83-81fbc7dfef58 ']' 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 50220 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 50220 ']' 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 50220 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 50220 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50220' 00:08:16.345 killing process with pid 50220 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 50220 00:08:16.345 [2024-07-14 21:07:27.829363] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:16.345 [2024-07-14 21:07:27.829386] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:16.345 [2024-07-14 21:07:27.829409] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:16.345 21:07:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 50220 00:08:16.345 [2024-07-14 21:07:27.829413] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x184c54c35180 name raid_bdev1, state offline 00:08:16.345 [2024-07-14 21:07:27.845894] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:16.604 21:07:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:08:16.604 00:08:16.604 real 0m8.547s 00:08:16.604 user 0m14.675s 00:08:16.604 sys 0m1.604s 00:08:16.604 21:07:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:16.604 21:07:28 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.604 ************************************ 00:08:16.604 END TEST raid_superblock_test 00:08:16.604 ************************************ 00:08:16.604 21:07:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:16.604 21:07:28 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 2 read 00:08:16.604 21:07:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:16.604 21:07:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:16.604 21:07:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:16.604 ************************************ 00:08:16.604 START TEST raid_read_error_test 00:08:16.604 ************************************ 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 read 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.Mx6qhotySZ 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50485 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50485 
/var/tmp/spdk-raid.sock 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 50485 ']' 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:16.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:16.604 21:07:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:16.604 [2024-07-14 21:07:28.142264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:16.604 [2024-07-14 21:07:28.142447] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:17.171 EAL: TSC is not safe to use in SMP mode 00:08:17.171 EAL: TSC is not invariant 00:08:17.171 [2024-07-14 21:07:28.673171] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:17.429 [2024-07-14 21:07:28.770774] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:17.429 [2024-07-14 21:07:28.773306] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:17.429 [2024-07-14 21:07:28.774207] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.429 [2024-07-14 21:07:28.774225] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:17.686 21:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:17.687 21:07:29 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:08:17.687 21:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:17.687 21:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:17.945 BaseBdev1_malloc 00:08:17.945 21:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:18.203 true 00:08:18.203 21:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:18.461 [2024-07-14 21:07:29.948208] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:18.461 [2024-07-14 21:07:29.948346] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:18.461 [2024-07-14 21:07:29.948395] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12b4cec34780 00:08:18.461 [2024-07-14 21:07:29.948414] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 
00:08:18.461 [2024-07-14 21:07:29.949571] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:18.461 [2024-07-14 21:07:29.949612] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:18.461 BaseBdev1 00:08:18.461 21:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:18.461 21:07:29 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:18.719 BaseBdev2_malloc 00:08:18.719 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:18.977 true 00:08:18.977 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:19.234 [2024-07-14 21:07:30.660177] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:19.234 [2024-07-14 21:07:30.660251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:19.234 [2024-07-14 21:07:30.660285] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x12b4cec34c80 00:08:19.234 [2024-07-14 21:07:30.660294] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:19.234 [2024-07-14 21:07:30.661218] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:19.234 [2024-07-14 21:07:30.661243] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:19.234 BaseBdev2 00:08:19.234 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:19.493 [2024-07-14 21:07:30.916175] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:19.493 [2024-07-14 21:07:30.916916] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:19.493 [2024-07-14 21:07:30.916992] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x12b4cec34f00 00:08:19.493 [2024-07-14 21:07:30.916998] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:19.493 [2024-07-14 21:07:30.917054] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12b4ceca0e20 00:08:19.493 [2024-07-14 21:07:30.917150] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x12b4cec34f00 00:08:19.493 [2024-07-14 21:07:30.917154] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x12b4cec34f00 00:08:19.493 [2024-07-14 21:07:30.917194] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:19.493 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:19.493 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:19.493 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:19.493 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:19.494 21:07:30 
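[annotation] In the error-injection tests each base bdev is a three-layer stack, with an error bdev in the middle as the injection point; bdev_error_create registers the injectable bdev under an EE_ prefix, as the trace confirms. Recapping the commands for the first base (BaseBdev2 is built the same way):
# malloc -> error (EE_*) -> passthru; failures are injected at the EE_ layer
rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
# the raid is then assembled on the passthru tops, as in the superblock test
rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
[end annotation]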
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:19.494 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:19.494 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:19.494 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:19.494 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:19.494 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:19.494 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:19.494 21:07:30 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:19.752 21:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:19.752 "name": "raid_bdev1", 00:08:19.752 "uuid": "153aa099-4225-11ef-aa83-81fbc7dfef58", 00:08:19.752 "strip_size_kb": 64, 00:08:19.752 "state": "online", 00:08:19.752 "raid_level": "concat", 00:08:19.752 "superblock": true, 00:08:19.752 "num_base_bdevs": 2, 00:08:19.752 "num_base_bdevs_discovered": 2, 00:08:19.752 "num_base_bdevs_operational": 2, 00:08:19.752 "base_bdevs_list": [ 00:08:19.752 { 00:08:19.752 "name": "BaseBdev1", 00:08:19.752 "uuid": "f5969ef0-8d7a-9858-aeb9-feeff9a32c60", 00:08:19.752 "is_configured": true, 00:08:19.752 "data_offset": 2048, 00:08:19.752 "data_size": 63488 00:08:19.752 }, 00:08:19.752 { 00:08:19.752 "name": "BaseBdev2", 00:08:19.752 "uuid": "caf448be-91e5-6f59-b857-7be4aaf33755", 00:08:19.752 "is_configured": true, 00:08:19.752 "data_offset": 2048, 00:08:19.752 "data_size": 63488 00:08:19.752 } 00:08:19.752 ] 00:08:19.752 }' 00:08:19.752 21:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:19.752 21:07:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:20.051 21:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:20.051 21:07:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:20.052 [2024-07-14 21:07:31.556407] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x12b4ceca0ec0 00:08:20.985 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:08:21.243 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:21.243 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:08:21.243 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:21.243 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:21.243 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:21.243 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:21.243 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:21.244 21:07:32 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:21.244 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:21.244 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:21.244 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:21.244 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:21.244 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:21.244 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:21.244 21:07:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:21.502 21:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:21.502 "name": "raid_bdev1", 00:08:21.502 "uuid": "153aa099-4225-11ef-aa83-81fbc7dfef58", 00:08:21.502 "strip_size_kb": 64, 00:08:21.502 "state": "online", 00:08:21.502 "raid_level": "concat", 00:08:21.502 "superblock": true, 00:08:21.502 "num_base_bdevs": 2, 00:08:21.502 "num_base_bdevs_discovered": 2, 00:08:21.502 "num_base_bdevs_operational": 2, 00:08:21.502 "base_bdevs_list": [ 00:08:21.502 { 00:08:21.502 "name": "BaseBdev1", 00:08:21.502 "uuid": "f5969ef0-8d7a-9858-aeb9-feeff9a32c60", 00:08:21.502 "is_configured": true, 00:08:21.502 "data_offset": 2048, 00:08:21.502 "data_size": 63488 00:08:21.502 }, 00:08:21.502 { 00:08:21.502 "name": "BaseBdev2", 00:08:21.502 "uuid": "caf448be-91e5-6f59-b857-7be4aaf33755", 00:08:21.502 "is_configured": true, 00:08:21.502 "data_offset": 2048, 00:08:21.502 "data_size": 63488 00:08:21.502 } 00:08:21.502 ] 00:08:21.502 }' 00:08:21.502 21:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:21.502 21:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.066 21:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:22.066 [2024-07-14 21:07:33.606075] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:22.066 [2024-07-14 21:07:33.606110] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:22.066 [2024-07-14 21:07:33.606528] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:22.066 [2024-07-14 21:07:33.606538] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:22.066 [2024-07-14 21:07:33.606544] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:22.066 [2024-07-14 21:07:33.606549] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x12b4cec34f00 name raid_bdev1, state offline 00:08:22.066 0 00:08:22.324 21:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50485 00:08:22.324 21:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 50485 ']' 00:08:22.324 21:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 50485 00:08:22.324 21:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:08:22.324 21:07:33 bdev_raid.raid_read_error_test -- 
common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:22.324 21:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50485 00:08:22.324 21:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:08:22.324 21:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:22.324 21:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:22.324 21:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50485' 00:08:22.324 killing process with pid 50485 00:08:22.324 21:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 50485 00:08:22.324 [2024-07-14 21:07:33.632875] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:22.324 21:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 50485 00:08:22.324 [2024-07-14 21:07:33.645752] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:22.583 21:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.Mx6qhotySZ 00:08:22.583 21:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:22.583 21:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:22.583 21:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:08:22.583 21:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:08:22.583 21:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:22.583 21:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:22.583 21:07:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:08:22.583 00:08:22.583 real 0m5.766s 00:08:22.583 user 0m8.715s 00:08:22.583 sys 0m1.018s 00:08:22.583 ************************************ 00:08:22.583 END TEST raid_read_error_test 00:08:22.583 ************************************ 00:08:22.583 21:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:22.583 21:07:33 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.583 21:07:33 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:22.583 21:07:33 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 2 write 00:08:22.583 21:07:33 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:22.583 21:07:33 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:22.583 21:07:33 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:22.583 ************************************ 00:08:22.583 START TEST raid_write_error_test 00:08:22.583 ************************************ 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 2 write 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.3d55YuGdEW 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=50609 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 50609 /var/tmp/spdk-raid.sock 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 50609 ']' 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:08:22.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:22.583 21:07:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:22.583 [2024-07-14 21:07:33.958274] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
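
[Annotation] The write-error variant now repeats the same bring-up: a fresh bdevperf app on a private RPC socket, then (in the entries that follow) each base bdev assembled as a malloc disk wrapped by an error-injection bdev and a passthru. A condensed sketch of that sequence using the arguments recorded in this run; the backgrounding, the $rpc shorthand, and the stdout redirect into the log file are editorial (xtrace does not echo redirections), and waitforlisten is the harness helper from autotest_common.sh:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py

    # 60 s of 50/50 random read/write (-w randrw -M 50), 128k I/Os at queue
    # depth 1 against raid_bdev1; -z defers I/O until perform_tests is called
    # and -L bdev_raid enables the raid debug log.
    bdevperf_log=$(mktemp -p /raidtest)   # /raidtest/tmp.3d55YuGdEW in this run
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
        -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid \
        > "$bdevperf_log" &
    raid_pid=$!
    waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock

    # Stack one base bdev: malloc disk -> error-injection bdev -> passthru.
    # BaseBdev2 is built the same way from BaseBdev2_malloc.
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc
    "$rpc" -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
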
00:08:22.583 [2024-07-14 21:07:33.958431] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:23.149 EAL: TSC is not safe to use in SMP mode 00:08:23.149 EAL: TSC is not invariant 00:08:23.149 [2024-07-14 21:07:34.517074] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:23.149 [2024-07-14 21:07:34.619425] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:08:23.149 [2024-07-14 21:07:34.621967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:23.149 [2024-07-14 21:07:34.622950] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.149 [2024-07-14 21:07:34.622965] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:23.408 21:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:23.408 21:07:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:08:23.408 21:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:23.408 21:07:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:08:23.665 BaseBdev1_malloc 00:08:23.923 21:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:08:23.923 true 00:08:23.923 21:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:08:24.180 [2024-07-14 21:07:35.696955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:08:24.180 [2024-07-14 21:07:35.697028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.180 [2024-07-14 21:07:35.697054] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbf6a8834780 00:08:24.180 [2024-07-14 21:07:35.697062] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.180 [2024-07-14 21:07:35.697742] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.180 [2024-07-14 21:07:35.697770] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:08:24.180 BaseBdev1 00:08:24.180 21:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:08:24.181 21:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:08:24.438 BaseBdev2_malloc 00:08:24.732 21:07:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:08:24.732 true 00:08:24.732 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:08:24.993 [2024-07-14 21:07:36.420962] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:08:24.993 [2024-07-14 21:07:36.421021] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:24.993 [2024-07-14 21:07:36.421063] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xbf6a8834c80 00:08:24.993 [2024-07-14 21:07:36.421071] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:24.993 [2024-07-14 21:07:36.421760] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:24.993 [2024-07-14 21:07:36.421786] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:08:24.993 BaseBdev2 00:08:24.993 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:08:25.252 [2024-07-14 21:07:36.616981] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:25.252 [2024-07-14 21:07:36.617596] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:25.252 [2024-07-14 21:07:36.617653] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xbf6a8834f00 00:08:25.252 [2024-07-14 21:07:36.617659] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 126976, blocklen 512 00:08:25.252 [2024-07-14 21:07:36.617689] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbf6a88a0e20 00:08:25.252 [2024-07-14 21:07:36.617817] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xbf6a8834f00 00:08:25.252 [2024-07-14 21:07:36.617821] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xbf6a8834f00 00:08:25.252 [2024-07-14 21:07:36.617846] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:25.252 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:25.252 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:25.252 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:25.252 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:25.252 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:25.252 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:25.252 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:25.252 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:25.252 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:25.252 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:25.252 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:25.252 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:25.511 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:25.511 "name": "raid_bdev1", 00:08:25.511 "uuid": "18a0808b-4225-11ef-aa83-81fbc7dfef58", 00:08:25.511 "strip_size_kb": 64, 00:08:25.511 "state": "online", 00:08:25.511 
"raid_level": "concat", 00:08:25.511 "superblock": true, 00:08:25.511 "num_base_bdevs": 2, 00:08:25.511 "num_base_bdevs_discovered": 2, 00:08:25.511 "num_base_bdevs_operational": 2, 00:08:25.511 "base_bdevs_list": [ 00:08:25.511 { 00:08:25.511 "name": "BaseBdev1", 00:08:25.511 "uuid": "975d39cd-5cec-9e56-8f79-90ee1a67962f", 00:08:25.511 "is_configured": true, 00:08:25.511 "data_offset": 2048, 00:08:25.511 "data_size": 63488 00:08:25.511 }, 00:08:25.511 { 00:08:25.511 "name": "BaseBdev2", 00:08:25.511 "uuid": "0ebe1d98-6959-285b-a008-c0b5464f1c26", 00:08:25.511 "is_configured": true, 00:08:25.511 "data_offset": 2048, 00:08:25.511 "data_size": 63488 00:08:25.511 } 00:08:25.511 ] 00:08:25.511 }' 00:08:25.511 21:07:36 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:25.511 21:07:36 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:25.769 21:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:08:25.769 21:07:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:08:26.028 [2024-07-14 21:07:37.353171] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xbf6a88a0ec0 00:08:26.965 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 2 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:27.225 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:27.484 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:27.484 "name": "raid_bdev1", 00:08:27.484 "uuid": "18a0808b-4225-11ef-aa83-81fbc7dfef58", 00:08:27.484 "strip_size_kb": 64, 00:08:27.484 "state": "online", 00:08:27.484 
"raid_level": "concat", 00:08:27.484 "superblock": true, 00:08:27.484 "num_base_bdevs": 2, 00:08:27.484 "num_base_bdevs_discovered": 2, 00:08:27.484 "num_base_bdevs_operational": 2, 00:08:27.484 "base_bdevs_list": [ 00:08:27.484 { 00:08:27.484 "name": "BaseBdev1", 00:08:27.484 "uuid": "975d39cd-5cec-9e56-8f79-90ee1a67962f", 00:08:27.484 "is_configured": true, 00:08:27.484 "data_offset": 2048, 00:08:27.484 "data_size": 63488 00:08:27.484 }, 00:08:27.484 { 00:08:27.484 "name": "BaseBdev2", 00:08:27.484 "uuid": "0ebe1d98-6959-285b-a008-c0b5464f1c26", 00:08:27.484 "is_configured": true, 00:08:27.484 "data_offset": 2048, 00:08:27.484 "data_size": 63488 00:08:27.484 } 00:08:27.484 ] 00:08:27.484 }' 00:08:27.484 21:07:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:27.484 21:07:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:27.743 21:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:28.002 [2024-07-14 21:07:39.426086] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:28.002 [2024-07-14 21:07:39.426113] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:28.002 [2024-07-14 21:07:39.426447] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:28.002 [2024-07-14 21:07:39.426456] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:28.002 [2024-07-14 21:07:39.426462] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:28.002 [2024-07-14 21:07:39.426466] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xbf6a8834f00 name raid_bdev1, state offline 00:08:28.002 0 00:08:28.002 21:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 50609 00:08:28.002 21:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 50609 ']' 00:08:28.002 21:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 50609 00:08:28.002 21:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:08:28.002 21:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:28.002 21:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 50609 00:08:28.002 21:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:08:28.002 21:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:08:28.002 21:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:08:28.002 21:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50609' 00:08:28.002 killing process with pid 50609 00:08:28.002 21:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 50609 00:08:28.002 [2024-07-14 21:07:39.455349] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:28.002 21:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 50609 00:08:28.002 [2024-07-14 21:07:39.467057] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:28.262 21:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job 
/raidtest/tmp.3d55YuGdEW 00:08:28.262 21:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:08:28.262 21:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:08:28.262 21:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:08:28.262 21:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:08:28.262 21:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:28.262 21:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:08:28.262 21:07:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:08:28.262 00:08:28.262 real 0m5.721s 00:08:28.262 user 0m8.755s 00:08:28.262 sys 0m1.039s 00:08:28.262 21:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:28.262 21:07:39 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.262 ************************************ 00:08:28.262 END TEST raid_write_error_test 00:08:28.262 ************************************ 00:08:28.262 21:07:39 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:28.262 21:07:39 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:08:28.262 21:07:39 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 2 false 00:08:28.262 21:07:39 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:28.262 21:07:39 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:28.262 21:07:39 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:28.262 ************************************ 00:08:28.262 START TEST raid_state_function_test 00:08:28.262 ************************************ 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 false 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:28.262 21:07:39 
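
[Annotation] Both error tests above finish with the same pass criterion: scrape the failure rate that bdevperf logged for raid_bdev1 and require it to be nonzero (fail_per_s came out as 0.49 for the read pass and 0.48 here). Spelled out with this run's temp file:

    # Skip bdevperf's per-job summary lines, keep the raid_bdev1 row, and take
    # column 6, which carries the failures-per-second figure.
    fail_per_s=$(grep -v Job /raidtest/tmp.3d55YuGdEW | grep raid_bdev1 | awk '{print $6}')

    # concat carries no redundancy (has_redundancy returns 1), so the injected
    # error must surface as a nonzero failure rate.
    [[ "$fail_per_s" != "0.00" ]]
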
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=50735 00:08:28.262 Process raid pid: 50735 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 50735' 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 50735 /var/tmp/spdk-raid.sock 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 50735 ']' 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:28.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:28.262 21:07:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:28.262 [2024-07-14 21:07:39.728266] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:28.263 [2024-07-14 21:07:39.728522] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:28.831 EAL: TSC is not safe to use in SMP mode 00:08:28.831 EAL: TSC is not invariant 00:08:28.831 [2024-07-14 21:07:40.271464] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:28.831 [2024-07-14 21:07:40.368233] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
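
[Annotation] Unlike the error tests, the state-function test drives a bare bdev_svc app, and its first check exploits a deliberate quirk: creating a raid1 volume whose base bdevs do not exist yet must leave it registered in the "configuring" state rather than failing. A sketch of that probe, same socket and repo paths as above:

    # Neither BaseBdev1 nor BaseBdev2 exists yet; the RPC records the raid but
    # cannot assemble it, so the dump below should report "state": "configuring".
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
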
00:08:28.831 [2024-07-14 21:07:40.370945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:28.831 [2024-07-14 21:07:40.371862] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:28.831 [2024-07-14 21:07:40.371877] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:29.399 21:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:29.399 21:07:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:08:29.399 21:07:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:29.657 [2024-07-14 21:07:41.067129] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:29.657 [2024-07-14 21:07:41.067175] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:29.657 [2024-07-14 21:07:41.067180] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:29.657 [2024-07-14 21:07:41.067204] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:29.657 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:29.657 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:29.657 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:29.657 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:29.657 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:29.657 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:29.657 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:29.657 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:29.657 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:29.657 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:29.657 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:29.657 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:29.916 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:29.916 "name": "Existed_Raid", 00:08:29.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.916 "strip_size_kb": 0, 00:08:29.916 "state": "configuring", 00:08:29.916 "raid_level": "raid1", 00:08:29.916 "superblock": false, 00:08:29.916 "num_base_bdevs": 2, 00:08:29.916 "num_base_bdevs_discovered": 0, 00:08:29.916 "num_base_bdevs_operational": 2, 00:08:29.916 "base_bdevs_list": [ 00:08:29.916 { 00:08:29.916 "name": "BaseBdev1", 00:08:29.916 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.916 "is_configured": false, 00:08:29.916 "data_offset": 0, 00:08:29.916 "data_size": 0 00:08:29.916 }, 00:08:29.916 { 00:08:29.916 "name": "BaseBdev2", 00:08:29.916 
"uuid": "00000000-0000-0000-0000-000000000000", 00:08:29.916 "is_configured": false, 00:08:29.916 "data_offset": 0, 00:08:29.916 "data_size": 0 00:08:29.916 } 00:08:29.916 ] 00:08:29.916 }' 00:08:29.916 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:29.916 21:07:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:30.174 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:30.432 [2024-07-14 21:07:41.887108] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:30.432 [2024-07-14 21:07:41.887129] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xce540a34500 name Existed_Raid, state configuring 00:08:30.432 21:07:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:30.691 [2024-07-14 21:07:42.159103] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:30.691 [2024-07-14 21:07:42.159157] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:30.691 [2024-07-14 21:07:42.159161] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:30.691 [2024-07-14 21:07:42.159185] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:30.691 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:30.950 [2024-07-14 21:07:42.420099] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:30.950 BaseBdev1 00:08:30.950 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:30.950 21:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:30.950 21:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:30.950 21:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:30.950 21:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:30.950 21:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:30.950 21:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:31.209 21:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:31.468 [ 00:08:31.468 { 00:08:31.468 "name": "BaseBdev1", 00:08:31.468 "aliases": [ 00:08:31.468 "1c15d770-4225-11ef-aa83-81fbc7dfef58" 00:08:31.468 ], 00:08:31.468 "product_name": "Malloc disk", 00:08:31.468 "block_size": 512, 00:08:31.468 "num_blocks": 65536, 00:08:31.468 "uuid": "1c15d770-4225-11ef-aa83-81fbc7dfef58", 00:08:31.468 "assigned_rate_limits": { 00:08:31.468 "rw_ios_per_sec": 0, 00:08:31.468 "rw_mbytes_per_sec": 0, 00:08:31.468 "r_mbytes_per_sec": 0, 00:08:31.468 "w_mbytes_per_sec": 0 00:08:31.468 }, 00:08:31.468 
"claimed": true, 00:08:31.468 "claim_type": "exclusive_write", 00:08:31.468 "zoned": false, 00:08:31.468 "supported_io_types": { 00:08:31.468 "read": true, 00:08:31.468 "write": true, 00:08:31.468 "unmap": true, 00:08:31.468 "flush": true, 00:08:31.468 "reset": true, 00:08:31.468 "nvme_admin": false, 00:08:31.468 "nvme_io": false, 00:08:31.468 "nvme_io_md": false, 00:08:31.468 "write_zeroes": true, 00:08:31.468 "zcopy": true, 00:08:31.468 "get_zone_info": false, 00:08:31.468 "zone_management": false, 00:08:31.468 "zone_append": false, 00:08:31.468 "compare": false, 00:08:31.468 "compare_and_write": false, 00:08:31.468 "abort": true, 00:08:31.468 "seek_hole": false, 00:08:31.468 "seek_data": false, 00:08:31.468 "copy": true, 00:08:31.468 "nvme_iov_md": false 00:08:31.468 }, 00:08:31.468 "memory_domains": [ 00:08:31.468 { 00:08:31.468 "dma_device_id": "system", 00:08:31.468 "dma_device_type": 1 00:08:31.468 }, 00:08:31.468 { 00:08:31.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:31.468 "dma_device_type": 2 00:08:31.468 } 00:08:31.468 ], 00:08:31.468 "driver_specific": {} 00:08:31.468 } 00:08:31.468 ] 00:08:31.468 21:07:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:31.468 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:31.468 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:31.468 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:31.468 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:31.468 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:31.468 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:31.468 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:31.468 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:31.468 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:31.468 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:31.468 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:31.468 21:07:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:31.727 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:31.727 "name": "Existed_Raid", 00:08:31.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.727 "strip_size_kb": 0, 00:08:31.727 "state": "configuring", 00:08:31.727 "raid_level": "raid1", 00:08:31.727 "superblock": false, 00:08:31.727 "num_base_bdevs": 2, 00:08:31.727 "num_base_bdevs_discovered": 1, 00:08:31.727 "num_base_bdevs_operational": 2, 00:08:31.727 "base_bdevs_list": [ 00:08:31.727 { 00:08:31.727 "name": "BaseBdev1", 00:08:31.727 "uuid": "1c15d770-4225-11ef-aa83-81fbc7dfef58", 00:08:31.727 "is_configured": true, 00:08:31.727 "data_offset": 0, 00:08:31.727 "data_size": 65536 00:08:31.727 }, 00:08:31.727 { 00:08:31.727 "name": "BaseBdev2", 00:08:31.727 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:31.727 
"is_configured": false, 00:08:31.727 "data_offset": 0, 00:08:31.727 "data_size": 0 00:08:31.727 } 00:08:31.727 ] 00:08:31.727 }' 00:08:31.727 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:31.727 21:07:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:31.985 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:32.243 [2024-07-14 21:07:43.691244] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:32.243 [2024-07-14 21:07:43.691291] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xce540a34500 name Existed_Raid, state configuring 00:08:32.243 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:32.516 [2024-07-14 21:07:43.971257] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:32.516 [2024-07-14 21:07:43.972258] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:32.516 [2024-07-14 21:07:43.972306] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:32.516 21:07:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:32.788 21:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:32.788 "name": "Existed_Raid", 00:08:32.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.788 "strip_size_kb": 0, 00:08:32.788 "state": "configuring", 00:08:32.788 "raid_level": "raid1", 00:08:32.788 "superblock": false, 00:08:32.788 "num_base_bdevs": 2, 00:08:32.788 "num_base_bdevs_discovered": 1, 00:08:32.788 "num_base_bdevs_operational": 
2, 00:08:32.788 "base_bdevs_list": [ 00:08:32.788 { 00:08:32.788 "name": "BaseBdev1", 00:08:32.788 "uuid": "1c15d770-4225-11ef-aa83-81fbc7dfef58", 00:08:32.788 "is_configured": true, 00:08:32.788 "data_offset": 0, 00:08:32.788 "data_size": 65536 00:08:32.788 }, 00:08:32.788 { 00:08:32.788 "name": "BaseBdev2", 00:08:32.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:32.788 "is_configured": false, 00:08:32.788 "data_offset": 0, 00:08:32.788 "data_size": 0 00:08:32.788 } 00:08:32.788 ] 00:08:32.788 }' 00:08:32.788 21:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:32.788 21:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:33.046 21:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:33.304 [2024-07-14 21:07:44.827424] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:33.304 [2024-07-14 21:07:44.827451] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xce540a34a00 00:08:33.304 [2024-07-14 21:07:44.827471] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:08:33.304 [2024-07-14 21:07:44.827491] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xce540a97e20 00:08:33.304 [2024-07-14 21:07:44.827576] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xce540a34a00 00:08:33.304 [2024-07-14 21:07:44.827581] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xce540a34a00 00:08:33.304 [2024-07-14 21:07:44.827613] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:33.304 BaseBdev2 00:08:33.304 21:07:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:33.304 21:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:33.304 21:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:33.304 21:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:08:33.304 21:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:33.304 21:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:33.304 21:07:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:33.563 21:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:33.821 [ 00:08:33.821 { 00:08:33.821 "name": "BaseBdev2", 00:08:33.821 "aliases": [ 00:08:33.821 "1d854bb1-4225-11ef-aa83-81fbc7dfef58" 00:08:33.821 ], 00:08:33.822 "product_name": "Malloc disk", 00:08:33.822 "block_size": 512, 00:08:33.822 "num_blocks": 65536, 00:08:33.822 "uuid": "1d854bb1-4225-11ef-aa83-81fbc7dfef58", 00:08:33.822 "assigned_rate_limits": { 00:08:33.822 "rw_ios_per_sec": 0, 00:08:33.822 "rw_mbytes_per_sec": 0, 00:08:33.822 "r_mbytes_per_sec": 0, 00:08:33.822 "w_mbytes_per_sec": 0 00:08:33.822 }, 00:08:33.822 "claimed": true, 00:08:33.822 "claim_type": "exclusive_write", 00:08:33.822 "zoned": false, 00:08:33.822 
"supported_io_types": { 00:08:33.822 "read": true, 00:08:33.822 "write": true, 00:08:33.822 "unmap": true, 00:08:33.822 "flush": true, 00:08:33.822 "reset": true, 00:08:33.822 "nvme_admin": false, 00:08:33.822 "nvme_io": false, 00:08:33.822 "nvme_io_md": false, 00:08:33.822 "write_zeroes": true, 00:08:33.822 "zcopy": true, 00:08:33.822 "get_zone_info": false, 00:08:33.822 "zone_management": false, 00:08:33.822 "zone_append": false, 00:08:33.822 "compare": false, 00:08:33.822 "compare_and_write": false, 00:08:33.822 "abort": true, 00:08:33.822 "seek_hole": false, 00:08:33.822 "seek_data": false, 00:08:33.822 "copy": true, 00:08:33.822 "nvme_iov_md": false 00:08:33.822 }, 00:08:33.822 "memory_domains": [ 00:08:33.822 { 00:08:33.822 "dma_device_id": "system", 00:08:33.822 "dma_device_type": 1 00:08:33.822 }, 00:08:33.822 { 00:08:33.822 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:33.822 "dma_device_type": 2 00:08:33.822 } 00:08:33.822 ], 00:08:33.822 "driver_specific": {} 00:08:33.822 } 00:08:33.822 ] 00:08:34.080 21:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:08:34.080 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:34.080 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:34.080 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:34.080 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:34.081 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:34.081 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:34.081 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:34.081 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:34.081 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:34.081 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:34.081 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:34.081 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:34.081 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:34.081 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:34.339 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:34.339 "name": "Existed_Raid", 00:08:34.339 "uuid": "1d855349-4225-11ef-aa83-81fbc7dfef58", 00:08:34.339 "strip_size_kb": 0, 00:08:34.339 "state": "online", 00:08:34.339 "raid_level": "raid1", 00:08:34.339 "superblock": false, 00:08:34.339 "num_base_bdevs": 2, 00:08:34.339 "num_base_bdevs_discovered": 2, 00:08:34.339 "num_base_bdevs_operational": 2, 00:08:34.339 "base_bdevs_list": [ 00:08:34.339 { 00:08:34.339 "name": "BaseBdev1", 00:08:34.339 "uuid": "1c15d770-4225-11ef-aa83-81fbc7dfef58", 00:08:34.339 "is_configured": true, 00:08:34.339 "data_offset": 0, 00:08:34.339 "data_size": 65536 00:08:34.339 }, 00:08:34.339 { 00:08:34.339 "name": 
"BaseBdev2", 00:08:34.339 "uuid": "1d854bb1-4225-11ef-aa83-81fbc7dfef58", 00:08:34.339 "is_configured": true, 00:08:34.339 "data_offset": 0, 00:08:34.339 "data_size": 65536 00:08:34.339 } 00:08:34.339 ] 00:08:34.339 }' 00:08:34.339 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:34.339 21:07:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:34.598 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:34.598 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:34.598 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:34.598 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:34.598 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:34.598 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:34.598 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:34.598 21:07:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:34.856 [2024-07-14 21:07:46.227462] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:34.856 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:34.856 "name": "Existed_Raid", 00:08:34.856 "aliases": [ 00:08:34.856 "1d855349-4225-11ef-aa83-81fbc7dfef58" 00:08:34.856 ], 00:08:34.856 "product_name": "Raid Volume", 00:08:34.856 "block_size": 512, 00:08:34.856 "num_blocks": 65536, 00:08:34.856 "uuid": "1d855349-4225-11ef-aa83-81fbc7dfef58", 00:08:34.856 "assigned_rate_limits": { 00:08:34.856 "rw_ios_per_sec": 0, 00:08:34.856 "rw_mbytes_per_sec": 0, 00:08:34.856 "r_mbytes_per_sec": 0, 00:08:34.856 "w_mbytes_per_sec": 0 00:08:34.856 }, 00:08:34.856 "claimed": false, 00:08:34.856 "zoned": false, 00:08:34.856 "supported_io_types": { 00:08:34.856 "read": true, 00:08:34.856 "write": true, 00:08:34.856 "unmap": false, 00:08:34.856 "flush": false, 00:08:34.856 "reset": true, 00:08:34.856 "nvme_admin": false, 00:08:34.856 "nvme_io": false, 00:08:34.856 "nvme_io_md": false, 00:08:34.856 "write_zeroes": true, 00:08:34.856 "zcopy": false, 00:08:34.856 "get_zone_info": false, 00:08:34.856 "zone_management": false, 00:08:34.856 "zone_append": false, 00:08:34.856 "compare": false, 00:08:34.856 "compare_and_write": false, 00:08:34.856 "abort": false, 00:08:34.856 "seek_hole": false, 00:08:34.856 "seek_data": false, 00:08:34.856 "copy": false, 00:08:34.856 "nvme_iov_md": false 00:08:34.856 }, 00:08:34.856 "memory_domains": [ 00:08:34.856 { 00:08:34.856 "dma_device_id": "system", 00:08:34.856 "dma_device_type": 1 00:08:34.856 }, 00:08:34.856 { 00:08:34.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.856 "dma_device_type": 2 00:08:34.856 }, 00:08:34.856 { 00:08:34.856 "dma_device_id": "system", 00:08:34.856 "dma_device_type": 1 00:08:34.856 }, 00:08:34.856 { 00:08:34.856 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:34.856 "dma_device_type": 2 00:08:34.856 } 00:08:34.856 ], 00:08:34.856 "driver_specific": { 00:08:34.856 "raid": { 00:08:34.856 "uuid": "1d855349-4225-11ef-aa83-81fbc7dfef58", 00:08:34.856 "strip_size_kb": 0, 00:08:34.856 "state": "online", 00:08:34.856 
"raid_level": "raid1", 00:08:34.856 "superblock": false, 00:08:34.856 "num_base_bdevs": 2, 00:08:34.856 "num_base_bdevs_discovered": 2, 00:08:34.856 "num_base_bdevs_operational": 2, 00:08:34.856 "base_bdevs_list": [ 00:08:34.856 { 00:08:34.856 "name": "BaseBdev1", 00:08:34.856 "uuid": "1c15d770-4225-11ef-aa83-81fbc7dfef58", 00:08:34.856 "is_configured": true, 00:08:34.856 "data_offset": 0, 00:08:34.856 "data_size": 65536 00:08:34.856 }, 00:08:34.856 { 00:08:34.856 "name": "BaseBdev2", 00:08:34.856 "uuid": "1d854bb1-4225-11ef-aa83-81fbc7dfef58", 00:08:34.856 "is_configured": true, 00:08:34.856 "data_offset": 0, 00:08:34.856 "data_size": 65536 00:08:34.856 } 00:08:34.856 ] 00:08:34.856 } 00:08:34.856 } 00:08:34.856 }' 00:08:34.856 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:34.856 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:34.856 BaseBdev2' 00:08:34.856 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:34.856 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:34.856 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:35.114 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:35.114 "name": "BaseBdev1", 00:08:35.114 "aliases": [ 00:08:35.114 "1c15d770-4225-11ef-aa83-81fbc7dfef58" 00:08:35.114 ], 00:08:35.114 "product_name": "Malloc disk", 00:08:35.114 "block_size": 512, 00:08:35.114 "num_blocks": 65536, 00:08:35.114 "uuid": "1c15d770-4225-11ef-aa83-81fbc7dfef58", 00:08:35.114 "assigned_rate_limits": { 00:08:35.114 "rw_ios_per_sec": 0, 00:08:35.114 "rw_mbytes_per_sec": 0, 00:08:35.114 "r_mbytes_per_sec": 0, 00:08:35.114 "w_mbytes_per_sec": 0 00:08:35.114 }, 00:08:35.114 "claimed": true, 00:08:35.114 "claim_type": "exclusive_write", 00:08:35.114 "zoned": false, 00:08:35.114 "supported_io_types": { 00:08:35.114 "read": true, 00:08:35.114 "write": true, 00:08:35.115 "unmap": true, 00:08:35.115 "flush": true, 00:08:35.115 "reset": true, 00:08:35.115 "nvme_admin": false, 00:08:35.115 "nvme_io": false, 00:08:35.115 "nvme_io_md": false, 00:08:35.115 "write_zeroes": true, 00:08:35.115 "zcopy": true, 00:08:35.115 "get_zone_info": false, 00:08:35.115 "zone_management": false, 00:08:35.115 "zone_append": false, 00:08:35.115 "compare": false, 00:08:35.115 "compare_and_write": false, 00:08:35.115 "abort": true, 00:08:35.115 "seek_hole": false, 00:08:35.115 "seek_data": false, 00:08:35.115 "copy": true, 00:08:35.115 "nvme_iov_md": false 00:08:35.115 }, 00:08:35.115 "memory_domains": [ 00:08:35.115 { 00:08:35.115 "dma_device_id": "system", 00:08:35.115 "dma_device_type": 1 00:08:35.115 }, 00:08:35.115 { 00:08:35.115 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.115 "dma_device_type": 2 00:08:35.115 } 00:08:35.115 ], 00:08:35.115 "driver_specific": {} 00:08:35.115 }' 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq 
.md_size 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:35.115 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:35.373 "name": "BaseBdev2", 00:08:35.373 "aliases": [ 00:08:35.373 "1d854bb1-4225-11ef-aa83-81fbc7dfef58" 00:08:35.373 ], 00:08:35.373 "product_name": "Malloc disk", 00:08:35.373 "block_size": 512, 00:08:35.373 "num_blocks": 65536, 00:08:35.373 "uuid": "1d854bb1-4225-11ef-aa83-81fbc7dfef58", 00:08:35.373 "assigned_rate_limits": { 00:08:35.373 "rw_ios_per_sec": 0, 00:08:35.373 "rw_mbytes_per_sec": 0, 00:08:35.373 "r_mbytes_per_sec": 0, 00:08:35.373 "w_mbytes_per_sec": 0 00:08:35.373 }, 00:08:35.373 "claimed": true, 00:08:35.373 "claim_type": "exclusive_write", 00:08:35.373 "zoned": false, 00:08:35.373 "supported_io_types": { 00:08:35.373 "read": true, 00:08:35.373 "write": true, 00:08:35.373 "unmap": true, 00:08:35.373 "flush": true, 00:08:35.373 "reset": true, 00:08:35.373 "nvme_admin": false, 00:08:35.373 "nvme_io": false, 00:08:35.373 "nvme_io_md": false, 00:08:35.373 "write_zeroes": true, 00:08:35.373 "zcopy": true, 00:08:35.373 "get_zone_info": false, 00:08:35.373 "zone_management": false, 00:08:35.373 "zone_append": false, 00:08:35.373 "compare": false, 00:08:35.373 "compare_and_write": false, 00:08:35.373 "abort": true, 00:08:35.373 "seek_hole": false, 00:08:35.373 "seek_data": false, 00:08:35.373 "copy": true, 00:08:35.373 "nvme_iov_md": false 00:08:35.373 }, 00:08:35.373 "memory_domains": [ 00:08:35.373 { 00:08:35.373 "dma_device_id": "system", 00:08:35.373 "dma_device_type": 1 00:08:35.373 }, 00:08:35.373 { 00:08:35.373 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:35.373 "dma_device_type": 2 00:08:35.373 } 00:08:35.373 ], 00:08:35.373 "driver_specific": {} 00:08:35.373 }' 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ 
null == null ]] 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:35.373 21:07:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:36.001 [2024-07-14 21:07:47.195584] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:36.001 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:36.001 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:36.001 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:36.001 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:36.001 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:36.001 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:36.001 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:36.001 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:36.001 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:36.001 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:36.002 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:36.002 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:36.002 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:36.002 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:36.002 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:36.002 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:36.002 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:36.002 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:36.002 "name": "Existed_Raid", 00:08:36.002 "uuid": "1d855349-4225-11ef-aa83-81fbc7dfef58", 00:08:36.002 "strip_size_kb": 0, 00:08:36.002 "state": "online", 00:08:36.002 "raid_level": "raid1", 00:08:36.002 "superblock": false, 00:08:36.002 "num_base_bdevs": 2, 00:08:36.002 "num_base_bdevs_discovered": 1, 00:08:36.002 "num_base_bdevs_operational": 1, 00:08:36.002 "base_bdevs_list": [ 00:08:36.002 { 00:08:36.002 "name": null, 00:08:36.002 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:36.002 "is_configured": false, 
00:08:36.002 "data_offset": 0, 00:08:36.002 "data_size": 65536 00:08:36.002 }, 00:08:36.002 { 00:08:36.002 "name": "BaseBdev2", 00:08:36.002 "uuid": "1d854bb1-4225-11ef-aa83-81fbc7dfef58", 00:08:36.002 "is_configured": true, 00:08:36.002 "data_offset": 0, 00:08:36.002 "data_size": 65536 00:08:36.002 } 00:08:36.002 ] 00:08:36.002 }' 00:08:36.002 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:36.002 21:07:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:36.571 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:36.571 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:36.571 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:36.571 21:07:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:36.829 21:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:36.829 21:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:36.829 21:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:37.087 [2024-07-14 21:07:48.458633] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:37.087 [2024-07-14 21:07:48.458685] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:37.087 [2024-07-14 21:07:48.465461] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:37.087 [2024-07-14 21:07:48.465480] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:37.087 [2024-07-14 21:07:48.465484] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xce540a34a00 name Existed_Raid, state offline 00:08:37.087 21:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:37.087 21:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:37.087 21:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:37.087 21:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 50735 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 50735 ']' 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 50735 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@956 -- # tail -1 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 50735 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:37.345 killing process with pid 50735 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 50735' 00:08:37.345 21:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 50735 00:08:37.345 [2024-07-14 21:07:48.778172] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:37.346 [2024-07-14 21:07:48.778206] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:37.346 21:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 50735 00:08:37.604 21:07:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:08:37.604 00:08:37.604 real 0m9.263s 00:08:37.604 user 0m16.110s 00:08:37.604 sys 0m1.629s 00:08:37.604 ************************************ 00:08:37.604 END TEST raid_state_function_test 00:08:37.604 ************************************ 00:08:37.604 21:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:37.604 21:07:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:08:37.604 21:07:49 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:37.604 21:07:49 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 2 true 00:08:37.604 21:07:49 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:37.604 21:07:49 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:37.604 21:07:49 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:37.604 ************************************ 00:08:37.604 START TEST raid_state_function_test_sb 00:08:37.604 ************************************ 00:08:37.604 21:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:08:37.604 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:08:37.604 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:08:37.604 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:08:37.604 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:08:37.604 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:08:37.604 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:37.604 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:08:37.604 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:37.604 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:37.604 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:08:37.604 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=51006 00:08:37.605 Process raid pid: 51006 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51006' 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 51006 /var/tmp/spdk-raid.sock 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 51006 ']' 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:37.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:37.605 21:07:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:37.605 [2024-07-14 21:07:49.043705] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:37.605 [2024-07-14 21:07:49.043977] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:38.170 EAL: TSC is not safe to use in SMP mode 00:08:38.170 EAL: TSC is not invariant 00:08:38.170 [2024-07-14 21:07:49.626365] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:38.428 [2024-07-14 21:07:49.720781] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
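The set-up above can be reproduced by hand against a standalone bdev_svc. A minimal sketch in plain shell, assuming a built SPDK tree; note that $SPDK_DIR and $RPC are illustrative shorthands introduced here, where the harness spells out the full /home/vagrant/spdk_repo/spdk paths:

SOCK=/var/tmp/spdk-raid.sock
RPC="$SPDK_DIR/scripts/rpc.py -s $SOCK"   # $SPDK_DIR: illustrative path to a built SPDK tree
# Start the minimal bdev application on a private RPC socket with raid debug
# logging, then wait for the socket (the harness does this via waitforlisten).
"$SPDK_DIR/test/app/bdev_svc/bdev_svc" -r "$SOCK" -i 0 -L bdev_raid &
while [ ! -S "$SOCK" ]; do sleep 0.1; done
# Registering the raid1 set before its members exist leaves it "configuring".
$RPC bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid
# Create the 32 MiB, 512-byte-block members (65536 blocks each, matching the
# JSON dumps in this log); the raid goes "online" once the last one is claimed.
$RPC bdev_malloc_create 32 512 -b BaseBdev1
$RPC bdev_malloc_create 32 512 -b BaseBdev2
# Inspect the assembled state the same way the test does.
$RPC bdev_raid_get_bdevs all

Deleting one member afterwards with bdev_malloc_delete leaves the raid1 online with a single operational base bdev, which is the state the verify_raid_bdev_state checks in this log assert.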
00:08:38.428 [2024-07-14 21:07:49.723286] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:38.428 [2024-07-14 21:07:49.724222] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.428 [2024-07-14 21:07:49.724237] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:38.686 21:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:38.686 21:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:08:38.686 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:38.945 [2024-07-14 21:07:50.306370] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:38.945 [2024-07-14 21:07:50.306434] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:38.945 [2024-07-14 21:07:50.306438] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:38.945 [2024-07-14 21:07:50.306446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:38.945 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:38.945 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:38.945 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:38.945 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:38.945 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:38.945 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:38.945 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:38.945 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:38.945 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:38.945 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:38.945 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:38.945 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:39.203 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:39.203 "name": "Existed_Raid", 00:08:39.203 "uuid": "20c9564d-4225-11ef-aa83-81fbc7dfef58", 00:08:39.203 "strip_size_kb": 0, 00:08:39.203 "state": "configuring", 00:08:39.203 "raid_level": "raid1", 00:08:39.203 "superblock": true, 00:08:39.203 "num_base_bdevs": 2, 00:08:39.203 "num_base_bdevs_discovered": 0, 00:08:39.203 "num_base_bdevs_operational": 2, 00:08:39.203 "base_bdevs_list": [ 00:08:39.203 { 00:08:39.203 "name": "BaseBdev1", 00:08:39.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.203 "is_configured": false, 00:08:39.203 "data_offset": 0, 00:08:39.203 "data_size": 0 00:08:39.203 }, 00:08:39.203 
{ 00:08:39.203 "name": "BaseBdev2", 00:08:39.203 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:39.203 "is_configured": false, 00:08:39.203 "data_offset": 0, 00:08:39.203 "data_size": 0 00:08:39.203 } 00:08:39.203 ] 00:08:39.203 }' 00:08:39.203 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:39.203 21:07:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:39.460 21:07:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:39.717 [2024-07-14 21:07:51.198358] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:39.717 [2024-07-14 21:07:51.198384] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d70d1434500 name Existed_Raid, state configuring 00:08:39.717 21:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:39.974 [2024-07-14 21:07:51.498387] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:08:39.974 [2024-07-14 21:07:51.498446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:08:39.974 [2024-07-14 21:07:51.498450] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:39.974 [2024-07-14 21:07:51.498458] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:39.974 21:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:08:40.233 [2024-07-14 21:07:51.739494] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:40.233 BaseBdev1 00:08:40.233 21:07:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:08:40.233 21:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:08:40.233 21:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:40.233 21:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:40.233 21:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:40.233 21:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:40.233 21:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:40.490 21:07:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:08:40.748 [ 00:08:40.748 { 00:08:40.748 "name": "BaseBdev1", 00:08:40.748 "aliases": [ 00:08:40.748 "21a3d8d0-4225-11ef-aa83-81fbc7dfef58" 00:08:40.748 ], 00:08:40.748 "product_name": "Malloc disk", 00:08:40.748 "block_size": 512, 00:08:40.748 "num_blocks": 65536, 00:08:40.748 "uuid": "21a3d8d0-4225-11ef-aa83-81fbc7dfef58", 00:08:40.748 "assigned_rate_limits": { 00:08:40.748 "rw_ios_per_sec": 0, 00:08:40.748 "rw_mbytes_per_sec": 0, 00:08:40.748 
"r_mbytes_per_sec": 0, 00:08:40.748 "w_mbytes_per_sec": 0 00:08:40.748 }, 00:08:40.748 "claimed": true, 00:08:40.748 "claim_type": "exclusive_write", 00:08:40.748 "zoned": false, 00:08:40.748 "supported_io_types": { 00:08:40.748 "read": true, 00:08:40.748 "write": true, 00:08:40.748 "unmap": true, 00:08:40.748 "flush": true, 00:08:40.748 "reset": true, 00:08:40.748 "nvme_admin": false, 00:08:40.748 "nvme_io": false, 00:08:40.748 "nvme_io_md": false, 00:08:40.748 "write_zeroes": true, 00:08:40.748 "zcopy": true, 00:08:40.748 "get_zone_info": false, 00:08:40.748 "zone_management": false, 00:08:40.748 "zone_append": false, 00:08:40.748 "compare": false, 00:08:40.748 "compare_and_write": false, 00:08:40.748 "abort": true, 00:08:40.748 "seek_hole": false, 00:08:40.748 "seek_data": false, 00:08:40.748 "copy": true, 00:08:40.748 "nvme_iov_md": false 00:08:40.748 }, 00:08:40.748 "memory_domains": [ 00:08:40.748 { 00:08:40.748 "dma_device_id": "system", 00:08:40.748 "dma_device_type": 1 00:08:40.748 }, 00:08:40.748 { 00:08:40.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:40.748 "dma_device_type": 2 00:08:40.748 } 00:08:40.748 ], 00:08:40.748 "driver_specific": {} 00:08:40.748 } 00:08:40.748 ] 00:08:40.748 21:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:40.748 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:40.748 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:40.748 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:40.748 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:40.748 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:40.748 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:40.748 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:40.748 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:40.748 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:40.748 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:40.748 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:40.748 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:41.007 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:41.007 "name": "Existed_Raid", 00:08:41.007 "uuid": "217f3989-4225-11ef-aa83-81fbc7dfef58", 00:08:41.007 "strip_size_kb": 0, 00:08:41.007 "state": "configuring", 00:08:41.007 "raid_level": "raid1", 00:08:41.007 "superblock": true, 00:08:41.007 "num_base_bdevs": 2, 00:08:41.007 "num_base_bdevs_discovered": 1, 00:08:41.007 "num_base_bdevs_operational": 2, 00:08:41.007 "base_bdevs_list": [ 00:08:41.007 { 00:08:41.007 "name": "BaseBdev1", 00:08:41.007 "uuid": "21a3d8d0-4225-11ef-aa83-81fbc7dfef58", 00:08:41.007 "is_configured": true, 00:08:41.007 "data_offset": 2048, 00:08:41.007 "data_size": 63488 00:08:41.007 }, 
00:08:41.007 { 00:08:41.007 "name": "BaseBdev2", 00:08:41.007 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:41.007 "is_configured": false, 00:08:41.007 "data_offset": 0, 00:08:41.007 "data_size": 0 00:08:41.007 } 00:08:41.007 ] 00:08:41.007 }' 00:08:41.007 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:41.007 21:07:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:41.265 21:07:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:08:41.523 [2024-07-14 21:07:53.046424] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:08:41.523 [2024-07-14 21:07:53.046470] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d70d1434500 name Existed_Raid, state configuring 00:08:41.523 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:08:41.782 [2024-07-14 21:07:53.262457] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:08:41.782 [2024-07-14 21:07:53.263366] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:08:41.782 [2024-07-14 21:07:53.263433] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:41.782 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:42.040 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:42.040 "name": "Existed_Raid", 00:08:42.040 "uuid": "228c6658-4225-11ef-aa83-81fbc7dfef58", 00:08:42.040 "strip_size_kb": 0, 00:08:42.040 "state": "configuring", 
00:08:42.040 "raid_level": "raid1", 00:08:42.040 "superblock": true, 00:08:42.040 "num_base_bdevs": 2, 00:08:42.040 "num_base_bdevs_discovered": 1, 00:08:42.040 "num_base_bdevs_operational": 2, 00:08:42.040 "base_bdevs_list": [ 00:08:42.040 { 00:08:42.040 "name": "BaseBdev1", 00:08:42.040 "uuid": "21a3d8d0-4225-11ef-aa83-81fbc7dfef58", 00:08:42.040 "is_configured": true, 00:08:42.040 "data_offset": 2048, 00:08:42.040 "data_size": 63488 00:08:42.040 }, 00:08:42.040 { 00:08:42.040 "name": "BaseBdev2", 00:08:42.040 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:42.040 "is_configured": false, 00:08:42.040 "data_offset": 0, 00:08:42.040 "data_size": 0 00:08:42.040 } 00:08:42.040 ] 00:08:42.040 }' 00:08:42.040 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:42.040 21:07:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:42.298 21:07:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:08:42.555 [2024-07-14 21:07:54.038629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:08:42.555 [2024-07-14 21:07:54.038707] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3d70d1434a00 00:08:42.555 [2024-07-14 21:07:54.038713] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:42.555 [2024-07-14 21:07:54.038734] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3d70d1497e20 00:08:42.555 [2024-07-14 21:07:54.038780] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3d70d1434a00 00:08:42.555 [2024-07-14 21:07:54.038785] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3d70d1434a00 00:08:42.555 [2024-07-14 21:07:54.038805] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:42.555 BaseBdev2 00:08:42.555 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:08:42.555 21:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:08:42.555 21:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:42.555 21:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:08:42.555 21:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:42.556 21:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:42.556 21:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:08:42.813 21:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:08:43.072 [ 00:08:43.072 { 00:08:43.072 "name": "BaseBdev2", 00:08:43.072 "aliases": [ 00:08:43.072 "2302cf95-4225-11ef-aa83-81fbc7dfef58" 00:08:43.072 ], 00:08:43.072 "product_name": "Malloc disk", 00:08:43.072 "block_size": 512, 00:08:43.072 "num_blocks": 65536, 00:08:43.072 "uuid": "2302cf95-4225-11ef-aa83-81fbc7dfef58", 00:08:43.072 "assigned_rate_limits": { 00:08:43.072 "rw_ios_per_sec": 0, 00:08:43.072 
"rw_mbytes_per_sec": 0, 00:08:43.072 "r_mbytes_per_sec": 0, 00:08:43.072 "w_mbytes_per_sec": 0 00:08:43.072 }, 00:08:43.072 "claimed": true, 00:08:43.072 "claim_type": "exclusive_write", 00:08:43.072 "zoned": false, 00:08:43.072 "supported_io_types": { 00:08:43.072 "read": true, 00:08:43.072 "write": true, 00:08:43.072 "unmap": true, 00:08:43.072 "flush": true, 00:08:43.072 "reset": true, 00:08:43.072 "nvme_admin": false, 00:08:43.072 "nvme_io": false, 00:08:43.072 "nvme_io_md": false, 00:08:43.072 "write_zeroes": true, 00:08:43.072 "zcopy": true, 00:08:43.072 "get_zone_info": false, 00:08:43.072 "zone_management": false, 00:08:43.072 "zone_append": false, 00:08:43.072 "compare": false, 00:08:43.072 "compare_and_write": false, 00:08:43.072 "abort": true, 00:08:43.072 "seek_hole": false, 00:08:43.072 "seek_data": false, 00:08:43.072 "copy": true, 00:08:43.072 "nvme_iov_md": false 00:08:43.072 }, 00:08:43.072 "memory_domains": [ 00:08:43.072 { 00:08:43.072 "dma_device_id": "system", 00:08:43.072 "dma_device_type": 1 00:08:43.072 }, 00:08:43.072 { 00:08:43.072 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.072 "dma_device_type": 2 00:08:43.072 } 00:08:43.072 ], 00:08:43.072 "driver_specific": {} 00:08:43.072 } 00:08:43.072 ] 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:43.072 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:43.330 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:43.330 "name": "Existed_Raid", 00:08:43.330 "uuid": "228c6658-4225-11ef-aa83-81fbc7dfef58", 00:08:43.330 "strip_size_kb": 0, 00:08:43.330 "state": "online", 00:08:43.330 "raid_level": "raid1", 00:08:43.330 "superblock": true, 00:08:43.330 "num_base_bdevs": 2, 00:08:43.330 "num_base_bdevs_discovered": 2, 00:08:43.330 "num_base_bdevs_operational": 2, 00:08:43.330 
"base_bdevs_list": [ 00:08:43.330 { 00:08:43.330 "name": "BaseBdev1", 00:08:43.330 "uuid": "21a3d8d0-4225-11ef-aa83-81fbc7dfef58", 00:08:43.330 "is_configured": true, 00:08:43.330 "data_offset": 2048, 00:08:43.330 "data_size": 63488 00:08:43.330 }, 00:08:43.330 { 00:08:43.330 "name": "BaseBdev2", 00:08:43.330 "uuid": "2302cf95-4225-11ef-aa83-81fbc7dfef58", 00:08:43.330 "is_configured": true, 00:08:43.330 "data_offset": 2048, 00:08:43.330 "data_size": 63488 00:08:43.330 } 00:08:43.330 ] 00:08:43.330 }' 00:08:43.330 21:07:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:43.330 21:07:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:43.588 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:08:43.588 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:08:43.588 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:43.588 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:43.588 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:43.588 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:08:43.588 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:08:43.588 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:43.846 [2024-07-14 21:07:55.290561] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:43.846 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:43.846 "name": "Existed_Raid", 00:08:43.846 "aliases": [ 00:08:43.846 "228c6658-4225-11ef-aa83-81fbc7dfef58" 00:08:43.846 ], 00:08:43.846 "product_name": "Raid Volume", 00:08:43.846 "block_size": 512, 00:08:43.846 "num_blocks": 63488, 00:08:43.846 "uuid": "228c6658-4225-11ef-aa83-81fbc7dfef58", 00:08:43.846 "assigned_rate_limits": { 00:08:43.846 "rw_ios_per_sec": 0, 00:08:43.846 "rw_mbytes_per_sec": 0, 00:08:43.846 "r_mbytes_per_sec": 0, 00:08:43.846 "w_mbytes_per_sec": 0 00:08:43.846 }, 00:08:43.846 "claimed": false, 00:08:43.846 "zoned": false, 00:08:43.846 "supported_io_types": { 00:08:43.846 "read": true, 00:08:43.846 "write": true, 00:08:43.846 "unmap": false, 00:08:43.846 "flush": false, 00:08:43.846 "reset": true, 00:08:43.846 "nvme_admin": false, 00:08:43.846 "nvme_io": false, 00:08:43.846 "nvme_io_md": false, 00:08:43.846 "write_zeroes": true, 00:08:43.846 "zcopy": false, 00:08:43.846 "get_zone_info": false, 00:08:43.846 "zone_management": false, 00:08:43.846 "zone_append": false, 00:08:43.846 "compare": false, 00:08:43.846 "compare_and_write": false, 00:08:43.846 "abort": false, 00:08:43.846 "seek_hole": false, 00:08:43.846 "seek_data": false, 00:08:43.846 "copy": false, 00:08:43.846 "nvme_iov_md": false 00:08:43.846 }, 00:08:43.846 "memory_domains": [ 00:08:43.846 { 00:08:43.846 "dma_device_id": "system", 00:08:43.846 "dma_device_type": 1 00:08:43.846 }, 00:08:43.846 { 00:08:43.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.846 "dma_device_type": 2 00:08:43.846 }, 00:08:43.846 { 00:08:43.846 "dma_device_id": "system", 00:08:43.846 "dma_device_type": 1 00:08:43.846 }, 
00:08:43.846 { 00:08:43.846 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:43.846 "dma_device_type": 2 00:08:43.846 } 00:08:43.846 ], 00:08:43.846 "driver_specific": { 00:08:43.846 "raid": { 00:08:43.846 "uuid": "228c6658-4225-11ef-aa83-81fbc7dfef58", 00:08:43.846 "strip_size_kb": 0, 00:08:43.846 "state": "online", 00:08:43.846 "raid_level": "raid1", 00:08:43.846 "superblock": true, 00:08:43.846 "num_base_bdevs": 2, 00:08:43.846 "num_base_bdevs_discovered": 2, 00:08:43.846 "num_base_bdevs_operational": 2, 00:08:43.846 "base_bdevs_list": [ 00:08:43.846 { 00:08:43.846 "name": "BaseBdev1", 00:08:43.846 "uuid": "21a3d8d0-4225-11ef-aa83-81fbc7dfef58", 00:08:43.846 "is_configured": true, 00:08:43.846 "data_offset": 2048, 00:08:43.846 "data_size": 63488 00:08:43.846 }, 00:08:43.846 { 00:08:43.846 "name": "BaseBdev2", 00:08:43.846 "uuid": "2302cf95-4225-11ef-aa83-81fbc7dfef58", 00:08:43.846 "is_configured": true, 00:08:43.846 "data_offset": 2048, 00:08:43.846 "data_size": 63488 00:08:43.846 } 00:08:43.846 ] 00:08:43.846 } 00:08:43.846 } 00:08:43.846 }' 00:08:43.846 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:43.846 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:08:43.846 BaseBdev2' 00:08:43.846 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:43.846 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:08:43.846 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:44.105 "name": "BaseBdev1", 00:08:44.105 "aliases": [ 00:08:44.105 "21a3d8d0-4225-11ef-aa83-81fbc7dfef58" 00:08:44.105 ], 00:08:44.105 "product_name": "Malloc disk", 00:08:44.105 "block_size": 512, 00:08:44.105 "num_blocks": 65536, 00:08:44.105 "uuid": "21a3d8d0-4225-11ef-aa83-81fbc7dfef58", 00:08:44.105 "assigned_rate_limits": { 00:08:44.105 "rw_ios_per_sec": 0, 00:08:44.105 "rw_mbytes_per_sec": 0, 00:08:44.105 "r_mbytes_per_sec": 0, 00:08:44.105 "w_mbytes_per_sec": 0 00:08:44.105 }, 00:08:44.105 "claimed": true, 00:08:44.105 "claim_type": "exclusive_write", 00:08:44.105 "zoned": false, 00:08:44.105 "supported_io_types": { 00:08:44.105 "read": true, 00:08:44.105 "write": true, 00:08:44.105 "unmap": true, 00:08:44.105 "flush": true, 00:08:44.105 "reset": true, 00:08:44.105 "nvme_admin": false, 00:08:44.105 "nvme_io": false, 00:08:44.105 "nvme_io_md": false, 00:08:44.105 "write_zeroes": true, 00:08:44.105 "zcopy": true, 00:08:44.105 "get_zone_info": false, 00:08:44.105 "zone_management": false, 00:08:44.105 "zone_append": false, 00:08:44.105 "compare": false, 00:08:44.105 "compare_and_write": false, 00:08:44.105 "abort": true, 00:08:44.105 "seek_hole": false, 00:08:44.105 "seek_data": false, 00:08:44.105 "copy": true, 00:08:44.105 "nvme_iov_md": false 00:08:44.105 }, 00:08:44.105 "memory_domains": [ 00:08:44.105 { 00:08:44.105 "dma_device_id": "system", 00:08:44.105 "dma_device_type": 1 00:08:44.105 }, 00:08:44.105 { 00:08:44.105 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.105 "dma_device_type": 2 00:08:44.105 } 00:08:44.105 ], 00:08:44.105 "driver_specific": {} 00:08:44.105 }' 00:08:44.105 21:07:55 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:08:44.105 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:44.363 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:44.363 "name": "BaseBdev2", 00:08:44.363 "aliases": [ 00:08:44.363 "2302cf95-4225-11ef-aa83-81fbc7dfef58" 00:08:44.363 ], 00:08:44.363 "product_name": "Malloc disk", 00:08:44.363 "block_size": 512, 00:08:44.363 "num_blocks": 65536, 00:08:44.363 "uuid": "2302cf95-4225-11ef-aa83-81fbc7dfef58", 00:08:44.363 "assigned_rate_limits": { 00:08:44.363 "rw_ios_per_sec": 0, 00:08:44.363 "rw_mbytes_per_sec": 0, 00:08:44.363 "r_mbytes_per_sec": 0, 00:08:44.363 "w_mbytes_per_sec": 0 00:08:44.363 }, 00:08:44.363 "claimed": true, 00:08:44.363 "claim_type": "exclusive_write", 00:08:44.363 "zoned": false, 00:08:44.363 "supported_io_types": { 00:08:44.363 "read": true, 00:08:44.363 "write": true, 00:08:44.363 "unmap": true, 00:08:44.363 "flush": true, 00:08:44.363 "reset": true, 00:08:44.363 "nvme_admin": false, 00:08:44.363 "nvme_io": false, 00:08:44.363 "nvme_io_md": false, 00:08:44.363 "write_zeroes": true, 00:08:44.363 "zcopy": true, 00:08:44.363 "get_zone_info": false, 00:08:44.363 "zone_management": false, 00:08:44.363 "zone_append": false, 00:08:44.363 "compare": false, 00:08:44.363 "compare_and_write": false, 00:08:44.363 "abort": true, 00:08:44.363 "seek_hole": false, 00:08:44.363 "seek_data": false, 00:08:44.363 "copy": true, 00:08:44.363 "nvme_iov_md": false 00:08:44.363 }, 00:08:44.363 "memory_domains": [ 00:08:44.363 { 00:08:44.363 "dma_device_id": "system", 00:08:44.363 "dma_device_type": 1 00:08:44.363 }, 00:08:44.363 { 00:08:44.363 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:44.363 "dma_device_type": 2 00:08:44.363 } 00:08:44.363 ], 00:08:44.363 "driver_specific": {} 00:08:44.363 }' 00:08:44.621 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:44.621 21:07:55 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:44.621 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:44.621 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:44.621 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:44.621 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:44.621 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:44.621 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:44.621 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:44.621 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:44.621 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:44.621 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:44.621 21:07:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:08:44.879 [2024-07-14 21:07:56.238540] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:44.879 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:08:45.138 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:45.138 
"name": "Existed_Raid", 00:08:45.138 "uuid": "228c6658-4225-11ef-aa83-81fbc7dfef58", 00:08:45.138 "strip_size_kb": 0, 00:08:45.138 "state": "online", 00:08:45.138 "raid_level": "raid1", 00:08:45.138 "superblock": true, 00:08:45.138 "num_base_bdevs": 2, 00:08:45.138 "num_base_bdevs_discovered": 1, 00:08:45.138 "num_base_bdevs_operational": 1, 00:08:45.138 "base_bdevs_list": [ 00:08:45.138 { 00:08:45.138 "name": null, 00:08:45.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:45.138 "is_configured": false, 00:08:45.138 "data_offset": 2048, 00:08:45.138 "data_size": 63488 00:08:45.138 }, 00:08:45.138 { 00:08:45.138 "name": "BaseBdev2", 00:08:45.138 "uuid": "2302cf95-4225-11ef-aa83-81fbc7dfef58", 00:08:45.138 "is_configured": true, 00:08:45.138 "data_offset": 2048, 00:08:45.138 "data_size": 63488 00:08:45.138 } 00:08:45.138 ] 00:08:45.138 }' 00:08:45.138 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:45.138 21:07:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:45.396 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:08:45.396 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:45.396 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:45.396 21:07:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:08:45.654 21:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:08:45.655 21:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:08:45.655 21:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:08:45.913 [2024-07-14 21:07:57.376548] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:08:45.913 [2024-07-14 21:07:57.376601] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:45.913 [2024-07-14 21:07:57.382670] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:45.913 [2024-07-14 21:07:57.382688] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:45.913 [2024-07-14 21:07:57.382692] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3d70d1434a00 name Existed_Raid, state offline 00:08:45.913 21:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:08:45.913 21:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:08:45.913 21:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:08:45.913 21:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:08:46.171 21:07:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 51006 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 51006 ']' 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 51006 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 51006 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:46.171 killing process with pid 51006 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51006' 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 51006 00:08:46.171 [2024-07-14 21:07:57.689195] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:46.171 [2024-07-14 21:07:57.689228] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:46.171 21:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 51006 00:08:46.430 21:07:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:08:46.430 00:08:46.430 real 0m8.840s 00:08:46.430 user 0m15.235s 00:08:46.430 sys 0m1.696s 00:08:46.430 ************************************ 00:08:46.430 END TEST raid_state_function_test_sb 00:08:46.430 ************************************ 00:08:46.430 21:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:46.430 21:07:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:08:46.430 21:07:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:46.430 21:07:57 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 2 00:08:46.430 21:07:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:08:46.430 21:07:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:46.430 21:07:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:08:46.430 ************************************ 00:08:46.430 START TEST raid_superblock_test 00:08:46.430 ************************************ 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:08:46.430 21:07:57 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=51280 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 51280 /var/tmp/spdk-raid.sock 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 51280 ']' 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:46.430 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:46.430 21:07:57 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:46.430 [2024-07-14 21:07:57.930924] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:08:46.430 [2024-07-14 21:07:57.931236] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:08:46.997 EAL: TSC is not safe to use in SMP mode 00:08:46.997 EAL: TSC is not invariant 00:08:46.997 [2024-07-14 21:07:58.473189] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:47.255 [2024-07-14 21:07:58.568004] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
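Unlike the state-function tests, the superblock test builds raid_bdev1 on passthru bdevs layered over malloc disks, giving each leg a fixed, predictable UUID. A hand-run sketch of the construction steps that follow, reusing the same illustrative $RPC shorthand as above (not a variable from the harness):

SOCK=/var/tmp/spdk-raid.sock
RPC="$SPDK_DIR/scripts/rpc.py -s $SOCK"   # $SPDK_DIR: illustrative, see above
# One leg = a 32 MiB malloc disk wrapped by a passthru bdev with a fixed UUID.
$RPC bdev_malloc_create 32 512 -b malloc1
$RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
$RPC bdev_malloc_create 32 512 -b malloc2
$RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
# Assemble raid1 over the two passthru bdevs; -s persists a superblock on each
# member so the array can later be reassembled from on-disk metadata alone.
$RPC bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s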
00:08:47.255 [2024-07-14 21:07:58.570459] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:47.255 [2024-07-14 21:07:58.571304] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.255 [2024-07-14 21:07:58.571334] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:08:47.514 21:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:47.514 21:07:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:08:47.514 21:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:08:47.514 21:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:47.514 21:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:08:47.514 21:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:08:47.514 21:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:08:47.514 21:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:47.514 21:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:47.514 21:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:47.514 21:07:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:08:47.773 malloc1 00:08:47.773 21:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:48.030 [2024-07-14 21:07:59.436116] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:48.030 [2024-07-14 21:07:59.436182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.030 [2024-07-14 21:07:59.436209] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeda64e34780 00:08:48.030 [2024-07-14 21:07:59.436217] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.030 [2024-07-14 21:07:59.437273] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.030 [2024-07-14 21:07:59.437314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:48.030 pt1 00:08:48.030 21:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:48.030 21:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:48.030 21:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:08:48.030 21:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:08:48.030 21:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:08:48.030 21:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:08:48.030 21:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:08:48.030 21:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:08:48.030 21:07:59 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:08:48.288 malloc2 00:08:48.288 21:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:48.546 [2024-07-14 21:07:59.940131] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:48.546 [2024-07-14 21:07:59.940198] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:48.546 [2024-07-14 21:07:59.940211] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeda64e34c80 00:08:48.546 [2024-07-14 21:07:59.940219] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:48.546 [2024-07-14 21:07:59.940913] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:48.546 [2024-07-14 21:07:59.940942] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:48.546 pt2 00:08:48.546 21:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:08:48.546 21:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:08:48.546 21:07:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:08:48.805 [2024-07-14 21:08:00.260168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:48.805 [2024-07-14 21:08:00.260815] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:48.805 [2024-07-14 21:08:00.260885] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xeda64e34f00 00:08:48.805 [2024-07-14 21:08:00.260892] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:48.805 [2024-07-14 21:08:00.260932] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xeda64e97e20 00:08:48.805 [2024-07-14 21:08:00.261030] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xeda64e34f00 00:08:48.805 [2024-07-14 21:08:00.261035] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xeda64e34f00 00:08:48.805 [2024-07-14 21:08:00.261072] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:48.805 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:48.805 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:48.805 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:48.805 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:48.805 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:48.805 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:48.805 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:48.805 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:48.805 21:08:00 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:48.805 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:48.805 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:48.805 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:49.372 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:49.372 "name": "raid_bdev1", 00:08:49.372 "uuid": "26b82a47-4225-11ef-aa83-81fbc7dfef58", 00:08:49.372 "strip_size_kb": 0, 00:08:49.372 "state": "online", 00:08:49.372 "raid_level": "raid1", 00:08:49.372 "superblock": true, 00:08:49.372 "num_base_bdevs": 2, 00:08:49.372 "num_base_bdevs_discovered": 2, 00:08:49.372 "num_base_bdevs_operational": 2, 00:08:49.372 "base_bdevs_list": [ 00:08:49.372 { 00:08:49.372 "name": "pt1", 00:08:49.372 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:49.372 "is_configured": true, 00:08:49.372 "data_offset": 2048, 00:08:49.372 "data_size": 63488 00:08:49.372 }, 00:08:49.372 { 00:08:49.372 "name": "pt2", 00:08:49.372 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.372 "is_configured": true, 00:08:49.372 "data_offset": 2048, 00:08:49.372 "data_size": 63488 00:08:49.372 } 00:08:49.372 ] 00:08:49.372 }' 00:08:49.372 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:49.372 21:08:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:49.631 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:08:49.631 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:49.631 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:49.631 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:49.631 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:49.631 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:49.631 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:49.631 21:08:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:49.889 [2024-07-14 21:08:01.244178] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:49.889 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:49.889 "name": "raid_bdev1", 00:08:49.889 "aliases": [ 00:08:49.889 "26b82a47-4225-11ef-aa83-81fbc7dfef58" 00:08:49.889 ], 00:08:49.889 "product_name": "Raid Volume", 00:08:49.889 "block_size": 512, 00:08:49.889 "num_blocks": 63488, 00:08:49.889 "uuid": "26b82a47-4225-11ef-aa83-81fbc7dfef58", 00:08:49.889 "assigned_rate_limits": { 00:08:49.889 "rw_ios_per_sec": 0, 00:08:49.889 "rw_mbytes_per_sec": 0, 00:08:49.889 "r_mbytes_per_sec": 0, 00:08:49.889 "w_mbytes_per_sec": 0 00:08:49.889 }, 00:08:49.889 "claimed": false, 00:08:49.889 "zoned": false, 00:08:49.889 "supported_io_types": { 00:08:49.889 "read": true, 00:08:49.889 "write": true, 00:08:49.889 "unmap": false, 00:08:49.889 "flush": false, 00:08:49.889 "reset": true, 00:08:49.889 "nvme_admin": false, 00:08:49.889 "nvme_io": 
false, 00:08:49.889 "nvme_io_md": false, 00:08:49.889 "write_zeroes": true, 00:08:49.889 "zcopy": false, 00:08:49.889 "get_zone_info": false, 00:08:49.889 "zone_management": false, 00:08:49.889 "zone_append": false, 00:08:49.889 "compare": false, 00:08:49.889 "compare_and_write": false, 00:08:49.889 "abort": false, 00:08:49.889 "seek_hole": false, 00:08:49.889 "seek_data": false, 00:08:49.889 "copy": false, 00:08:49.889 "nvme_iov_md": false 00:08:49.889 }, 00:08:49.889 "memory_domains": [ 00:08:49.889 { 00:08:49.889 "dma_device_id": "system", 00:08:49.889 "dma_device_type": 1 00:08:49.889 }, 00:08:49.889 { 00:08:49.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.889 "dma_device_type": 2 00:08:49.889 }, 00:08:49.889 { 00:08:49.889 "dma_device_id": "system", 00:08:49.889 "dma_device_type": 1 00:08:49.889 }, 00:08:49.889 { 00:08:49.889 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:49.889 "dma_device_type": 2 00:08:49.889 } 00:08:49.889 ], 00:08:49.889 "driver_specific": { 00:08:49.889 "raid": { 00:08:49.889 "uuid": "26b82a47-4225-11ef-aa83-81fbc7dfef58", 00:08:49.889 "strip_size_kb": 0, 00:08:49.889 "state": "online", 00:08:49.889 "raid_level": "raid1", 00:08:49.889 "superblock": true, 00:08:49.889 "num_base_bdevs": 2, 00:08:49.889 "num_base_bdevs_discovered": 2, 00:08:49.889 "num_base_bdevs_operational": 2, 00:08:49.889 "base_bdevs_list": [ 00:08:49.889 { 00:08:49.889 "name": "pt1", 00:08:49.889 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:49.889 "is_configured": true, 00:08:49.889 "data_offset": 2048, 00:08:49.889 "data_size": 63488 00:08:49.889 }, 00:08:49.889 { 00:08:49.889 "name": "pt2", 00:08:49.889 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:49.889 "is_configured": true, 00:08:49.889 "data_offset": 2048, 00:08:49.889 "data_size": 63488 00:08:49.889 } 00:08:49.889 ] 00:08:49.889 } 00:08:49.889 } 00:08:49.889 }' 00:08:49.889 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:49.889 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:49.889 pt2' 00:08:49.889 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:49.889 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:49.889 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:50.176 "name": "pt1", 00:08:50.176 "aliases": [ 00:08:50.176 "00000000-0000-0000-0000-000000000001" 00:08:50.176 ], 00:08:50.176 "product_name": "passthru", 00:08:50.176 "block_size": 512, 00:08:50.176 "num_blocks": 65536, 00:08:50.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:50.176 "assigned_rate_limits": { 00:08:50.176 "rw_ios_per_sec": 0, 00:08:50.176 "rw_mbytes_per_sec": 0, 00:08:50.176 "r_mbytes_per_sec": 0, 00:08:50.176 "w_mbytes_per_sec": 0 00:08:50.176 }, 00:08:50.176 "claimed": true, 00:08:50.176 "claim_type": "exclusive_write", 00:08:50.176 "zoned": false, 00:08:50.176 "supported_io_types": { 00:08:50.176 "read": true, 00:08:50.176 "write": true, 00:08:50.176 "unmap": true, 00:08:50.176 "flush": true, 00:08:50.176 "reset": true, 00:08:50.176 "nvme_admin": false, 00:08:50.176 "nvme_io": false, 00:08:50.176 "nvme_io_md": false, 00:08:50.176 "write_zeroes": true, 
00:08:50.176 "zcopy": true, 00:08:50.176 "get_zone_info": false, 00:08:50.176 "zone_management": false, 00:08:50.176 "zone_append": false, 00:08:50.176 "compare": false, 00:08:50.176 "compare_and_write": false, 00:08:50.176 "abort": true, 00:08:50.176 "seek_hole": false, 00:08:50.176 "seek_data": false, 00:08:50.176 "copy": true, 00:08:50.176 "nvme_iov_md": false 00:08:50.176 }, 00:08:50.176 "memory_domains": [ 00:08:50.176 { 00:08:50.176 "dma_device_id": "system", 00:08:50.176 "dma_device_type": 1 00:08:50.176 }, 00:08:50.176 { 00:08:50.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.176 "dma_device_type": 2 00:08:50.176 } 00:08:50.176 ], 00:08:50.176 "driver_specific": { 00:08:50.176 "passthru": { 00:08:50.176 "name": "pt1", 00:08:50.176 "base_bdev_name": "malloc1" 00:08:50.176 } 00:08:50.176 } 00:08:50.176 }' 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:50.176 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:50.434 "name": "pt2", 00:08:50.434 "aliases": [ 00:08:50.434 "00000000-0000-0000-0000-000000000002" 00:08:50.434 ], 00:08:50.434 "product_name": "passthru", 00:08:50.434 "block_size": 512, 00:08:50.434 "num_blocks": 65536, 00:08:50.434 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:50.434 "assigned_rate_limits": { 00:08:50.434 "rw_ios_per_sec": 0, 00:08:50.434 "rw_mbytes_per_sec": 0, 00:08:50.434 "r_mbytes_per_sec": 0, 00:08:50.434 "w_mbytes_per_sec": 0 00:08:50.434 }, 00:08:50.434 "claimed": true, 00:08:50.434 "claim_type": "exclusive_write", 00:08:50.434 "zoned": false, 00:08:50.434 "supported_io_types": { 00:08:50.434 "read": true, 00:08:50.434 "write": true, 00:08:50.434 "unmap": true, 00:08:50.434 "flush": true, 00:08:50.434 "reset": true, 00:08:50.434 "nvme_admin": false, 00:08:50.434 "nvme_io": false, 00:08:50.434 "nvme_io_md": false, 00:08:50.434 "write_zeroes": true, 00:08:50.434 "zcopy": true, 00:08:50.434 "get_zone_info": false, 00:08:50.434 "zone_management": false, 00:08:50.434 "zone_append": false, 00:08:50.434 
"compare": false, 00:08:50.434 "compare_and_write": false, 00:08:50.434 "abort": true, 00:08:50.434 "seek_hole": false, 00:08:50.434 "seek_data": false, 00:08:50.434 "copy": true, 00:08:50.434 "nvme_iov_md": false 00:08:50.434 }, 00:08:50.434 "memory_domains": [ 00:08:50.434 { 00:08:50.434 "dma_device_id": "system", 00:08:50.434 "dma_device_type": 1 00:08:50.434 }, 00:08:50.434 { 00:08:50.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:50.434 "dma_device_type": 2 00:08:50.434 } 00:08:50.434 ], 00:08:50.434 "driver_specific": { 00:08:50.434 "passthru": { 00:08:50.434 "name": "pt2", 00:08:50.434 "base_bdev_name": "malloc2" 00:08:50.434 } 00:08:50.434 } 00:08:50.434 }' 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:08:50.434 21:08:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:50.692 [2024-07-14 21:08:02.176202] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:50.692 21:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=26b82a47-4225-11ef-aa83-81fbc7dfef58 00:08:50.692 21:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 26b82a47-4225-11ef-aa83-81fbc7dfef58 ']' 00:08:50.692 21:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:50.951 [2024-07-14 21:08:02.456167] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:50.951 [2024-07-14 21:08:02.456187] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:50.951 [2024-07-14 21:08:02.456234] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:50.951 [2024-07-14 21:08:02.456249] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:50.951 [2024-07-14 21:08:02.456253] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xeda64e34f00 name raid_bdev1, state offline 00:08:50.951 21:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:08:50.951 21:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:08:51.209 21:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:08:51.209 21:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:08:51.209 21:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.209 21:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:51.467 21:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:08:51.467 21:08:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:51.725 21:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:08:51.725 21:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:08:51.983 21:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:08:51.983 21:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:51.983 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:08:51.983 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:51.983 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:51.983 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:51.983 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:51.983 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:51.983 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:52.241 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:52.241 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:52.241 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:08:52.241 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:08:52.241 [2024-07-14 21:08:03.740236] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:08:52.241 [2024-07-14 21:08:03.741003] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:08:52.241 [2024-07-14 21:08:03.741030] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid 
bdev found on bdev malloc1 00:08:52.241 [2024-07-14 21:08:03.741071] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:08:52.241 [2024-07-14 21:08:03.741083] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:52.242 [2024-07-14 21:08:03.741087] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xeda64e34c80 name raid_bdev1, state configuring 00:08:52.242 request: 00:08:52.242 { 00:08:52.242 "name": "raid_bdev1", 00:08:52.242 "raid_level": "raid1", 00:08:52.242 "base_bdevs": [ 00:08:52.242 "malloc1", 00:08:52.242 "malloc2" 00:08:52.242 ], 00:08:52.242 "superblock": false, 00:08:52.242 "method": "bdev_raid_create", 00:08:52.242 "req_id": 1 00:08:52.242 } 00:08:52.242 Got JSON-RPC error response 00:08:52.242 response: 00:08:52.242 { 00:08:52.242 "code": -17, 00:08:52.242 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:08:52.242 } 00:08:52.242 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:08:52.242 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:52.242 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:52.242 21:08:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:52.242 21:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:52.242 21:08:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:08:52.500 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:08:52.500 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:08:52.500 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:52.759 [2024-07-14 21:08:04.268237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:52.759 [2024-07-14 21:08:04.268305] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:52.759 [2024-07-14 21:08:04.268384] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeda64e34780 00:08:52.759 [2024-07-14 21:08:04.268392] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:52.759 [2024-07-14 21:08:04.269151] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:52.759 [2024-07-14 21:08:04.269176] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:52.759 [2024-07-14 21:08:04.269217] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:52.759 [2024-07-14 21:08:04.269229] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:52.759 pt1 00:08:52.759 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:08:52.759 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:52.759 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:08:52.759 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 
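The "File exists" failure above is the point of this step: after raid_bdev1 was deleted, malloc1 and malloc2 still carry its on-disk superblock, so bdev_raid_create over the raw malloc bdevs is rejected with JSON-RPC error -17 and the harness's NOT helper inverts that exit status into a pass. Re-registering pt1 over malloc1 then lets the examine path find the superblock and move raid_bdev1 to the "configuring" state with one of two base bdevs discovered, which the state check traced next verifies. A hedged plain-bash sketch of the same negative check, without the NOT wrapper:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # malloc1/malloc2 still hold raid_bdev1's superblock, so this create
    # must fail with -17 ("File exists")
    if "$rpc" -s "$sock" bdev_raid_create -r raid1 -b 'malloc1 malloc2' \
            -n raid_bdev1; then
        echo "unexpected: create over superblocked bdevs succeeded" >&2
        exit 1
    fi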
00:08:52.759 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:52.759 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:52.759 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:52.759 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:52.759 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:52.759 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:52.759 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:52.759 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.017 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:53.017 "name": "raid_bdev1", 00:08:53.017 "uuid": "26b82a47-4225-11ef-aa83-81fbc7dfef58", 00:08:53.017 "strip_size_kb": 0, 00:08:53.017 "state": "configuring", 00:08:53.017 "raid_level": "raid1", 00:08:53.017 "superblock": true, 00:08:53.017 "num_base_bdevs": 2, 00:08:53.017 "num_base_bdevs_discovered": 1, 00:08:53.017 "num_base_bdevs_operational": 2, 00:08:53.017 "base_bdevs_list": [ 00:08:53.017 { 00:08:53.017 "name": "pt1", 00:08:53.017 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.017 "is_configured": true, 00:08:53.017 "data_offset": 2048, 00:08:53.017 "data_size": 63488 00:08:53.017 }, 00:08:53.017 { 00:08:53.017 "name": null, 00:08:53.017 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.017 "is_configured": false, 00:08:53.017 "data_offset": 2048, 00:08:53.017 "data_size": 63488 00:08:53.017 } 00:08:53.017 ] 00:08:53.017 }' 00:08:53.017 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:53.017 21:08:04 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:53.583 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:08:53.583 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:08:53.583 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:53.583 21:08:04 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:53.583 [2024-07-14 21:08:05.068273] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:53.583 [2024-07-14 21:08:05.068371] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:53.583 [2024-07-14 21:08:05.068383] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeda64e34f00 00:08:53.583 [2024-07-14 21:08:05.068391] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:53.583 [2024-07-14 21:08:05.068507] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:53.583 [2024-07-14 21:08:05.068518] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:53.583 [2024-07-14 21:08:05.068543] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:53.583 [2024-07-14 21:08:05.068551] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:53.583 [2024-07-14 21:08:05.068580] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xeda64e35180 00:08:53.583 [2024-07-14 21:08:05.068585] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:53.583 [2024-07-14 21:08:05.068615] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xeda64e97e20 00:08:53.583 [2024-07-14 21:08:05.068672] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xeda64e35180 00:08:53.583 [2024-07-14 21:08:05.068677] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xeda64e35180 00:08:53.583 [2024-07-14 21:08:05.068700] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:53.583 pt2 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:53.583 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:53.842 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:53.842 "name": "raid_bdev1", 00:08:53.842 "uuid": "26b82a47-4225-11ef-aa83-81fbc7dfef58", 00:08:53.842 "strip_size_kb": 0, 00:08:53.842 "state": "online", 00:08:53.842 "raid_level": "raid1", 00:08:53.842 "superblock": true, 00:08:53.842 "num_base_bdevs": 2, 00:08:53.842 "num_base_bdevs_discovered": 2, 00:08:53.842 "num_base_bdevs_operational": 2, 00:08:53.842 "base_bdevs_list": [ 00:08:53.842 { 00:08:53.842 "name": "pt1", 00:08:53.842 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:53.842 "is_configured": true, 00:08:53.842 "data_offset": 2048, 00:08:53.842 "data_size": 63488 00:08:53.842 }, 00:08:53.842 { 00:08:53.842 "name": "pt2", 00:08:53.842 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:53.842 "is_configured": true, 00:08:53.842 "data_offset": 2048, 00:08:53.842 "data_size": 63488 00:08:53.842 } 00:08:53.842 ] 00:08:53.842 }' 00:08:53.842 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:53.842 
21:08:05 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:54.410 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:08:54.410 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:08:54.410 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:08:54.410 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:08:54.410 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:08:54.410 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:08:54.410 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:54.410 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:08:54.410 [2024-07-14 21:08:05.936365] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:54.410 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:08:54.410 "name": "raid_bdev1", 00:08:54.410 "aliases": [ 00:08:54.410 "26b82a47-4225-11ef-aa83-81fbc7dfef58" 00:08:54.410 ], 00:08:54.410 "product_name": "Raid Volume", 00:08:54.410 "block_size": 512, 00:08:54.410 "num_blocks": 63488, 00:08:54.410 "uuid": "26b82a47-4225-11ef-aa83-81fbc7dfef58", 00:08:54.410 "assigned_rate_limits": { 00:08:54.410 "rw_ios_per_sec": 0, 00:08:54.410 "rw_mbytes_per_sec": 0, 00:08:54.410 "r_mbytes_per_sec": 0, 00:08:54.410 "w_mbytes_per_sec": 0 00:08:54.410 }, 00:08:54.410 "claimed": false, 00:08:54.410 "zoned": false, 00:08:54.410 "supported_io_types": { 00:08:54.410 "read": true, 00:08:54.410 "write": true, 00:08:54.410 "unmap": false, 00:08:54.410 "flush": false, 00:08:54.410 "reset": true, 00:08:54.410 "nvme_admin": false, 00:08:54.410 "nvme_io": false, 00:08:54.410 "nvme_io_md": false, 00:08:54.410 "write_zeroes": true, 00:08:54.410 "zcopy": false, 00:08:54.410 "get_zone_info": false, 00:08:54.410 "zone_management": false, 00:08:54.410 "zone_append": false, 00:08:54.410 "compare": false, 00:08:54.410 "compare_and_write": false, 00:08:54.410 "abort": false, 00:08:54.410 "seek_hole": false, 00:08:54.410 "seek_data": false, 00:08:54.410 "copy": false, 00:08:54.410 "nvme_iov_md": false 00:08:54.410 }, 00:08:54.410 "memory_domains": [ 00:08:54.410 { 00:08:54.410 "dma_device_id": "system", 00:08:54.410 "dma_device_type": 1 00:08:54.410 }, 00:08:54.410 { 00:08:54.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.410 "dma_device_type": 2 00:08:54.410 }, 00:08:54.410 { 00:08:54.410 "dma_device_id": "system", 00:08:54.410 "dma_device_type": 1 00:08:54.410 }, 00:08:54.410 { 00:08:54.410 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.410 "dma_device_type": 2 00:08:54.410 } 00:08:54.410 ], 00:08:54.410 "driver_specific": { 00:08:54.410 "raid": { 00:08:54.410 "uuid": "26b82a47-4225-11ef-aa83-81fbc7dfef58", 00:08:54.410 "strip_size_kb": 0, 00:08:54.410 "state": "online", 00:08:54.410 "raid_level": "raid1", 00:08:54.410 "superblock": true, 00:08:54.410 "num_base_bdevs": 2, 00:08:54.411 "num_base_bdevs_discovered": 2, 00:08:54.411 "num_base_bdevs_operational": 2, 00:08:54.411 "base_bdevs_list": [ 00:08:54.411 { 00:08:54.411 "name": "pt1", 00:08:54.411 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:54.411 "is_configured": true, 00:08:54.411 
"data_offset": 2048, 00:08:54.411 "data_size": 63488 00:08:54.411 }, 00:08:54.411 { 00:08:54.411 "name": "pt2", 00:08:54.411 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:54.411 "is_configured": true, 00:08:54.411 "data_offset": 2048, 00:08:54.411 "data_size": 63488 00:08:54.411 } 00:08:54.411 ] 00:08:54.411 } 00:08:54.411 } 00:08:54.411 }' 00:08:54.411 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:08:54.670 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:08:54.670 pt2' 00:08:54.670 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:54.670 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:08:54.670 21:08:05 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:54.670 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:54.670 "name": "pt1", 00:08:54.670 "aliases": [ 00:08:54.670 "00000000-0000-0000-0000-000000000001" 00:08:54.670 ], 00:08:54.670 "product_name": "passthru", 00:08:54.670 "block_size": 512, 00:08:54.670 "num_blocks": 65536, 00:08:54.670 "uuid": "00000000-0000-0000-0000-000000000001", 00:08:54.670 "assigned_rate_limits": { 00:08:54.670 "rw_ios_per_sec": 0, 00:08:54.670 "rw_mbytes_per_sec": 0, 00:08:54.670 "r_mbytes_per_sec": 0, 00:08:54.670 "w_mbytes_per_sec": 0 00:08:54.670 }, 00:08:54.670 "claimed": true, 00:08:54.670 "claim_type": "exclusive_write", 00:08:54.670 "zoned": false, 00:08:54.670 "supported_io_types": { 00:08:54.670 "read": true, 00:08:54.670 "write": true, 00:08:54.670 "unmap": true, 00:08:54.670 "flush": true, 00:08:54.670 "reset": true, 00:08:54.670 "nvme_admin": false, 00:08:54.670 "nvme_io": false, 00:08:54.670 "nvme_io_md": false, 00:08:54.670 "write_zeroes": true, 00:08:54.670 "zcopy": true, 00:08:54.670 "get_zone_info": false, 00:08:54.670 "zone_management": false, 00:08:54.670 "zone_append": false, 00:08:54.670 "compare": false, 00:08:54.670 "compare_and_write": false, 00:08:54.670 "abort": true, 00:08:54.670 "seek_hole": false, 00:08:54.670 "seek_data": false, 00:08:54.670 "copy": true, 00:08:54.670 "nvme_iov_md": false 00:08:54.670 }, 00:08:54.670 "memory_domains": [ 00:08:54.670 { 00:08:54.670 "dma_device_id": "system", 00:08:54.670 "dma_device_type": 1 00:08:54.670 }, 00:08:54.670 { 00:08:54.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:54.670 "dma_device_type": 2 00:08:54.670 } 00:08:54.671 ], 00:08:54.671 "driver_specific": { 00:08:54.671 "passthru": { 00:08:54.671 "name": "pt1", 00:08:54.671 "base_bdev_name": "malloc1" 00:08:54.671 } 00:08:54.671 } 00:08:54.671 }' 00:08:54.671 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:08:54.929 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:08:55.187 "name": "pt2", 00:08:55.187 "aliases": [ 00:08:55.187 "00000000-0000-0000-0000-000000000002" 00:08:55.187 ], 00:08:55.187 "product_name": "passthru", 00:08:55.187 "block_size": 512, 00:08:55.187 "num_blocks": 65536, 00:08:55.187 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.187 "assigned_rate_limits": { 00:08:55.187 "rw_ios_per_sec": 0, 00:08:55.187 "rw_mbytes_per_sec": 0, 00:08:55.187 "r_mbytes_per_sec": 0, 00:08:55.187 "w_mbytes_per_sec": 0 00:08:55.187 }, 00:08:55.187 "claimed": true, 00:08:55.187 "claim_type": "exclusive_write", 00:08:55.187 "zoned": false, 00:08:55.187 "supported_io_types": { 00:08:55.187 "read": true, 00:08:55.187 "write": true, 00:08:55.187 "unmap": true, 00:08:55.187 "flush": true, 00:08:55.187 "reset": true, 00:08:55.187 "nvme_admin": false, 00:08:55.187 "nvme_io": false, 00:08:55.187 "nvme_io_md": false, 00:08:55.187 "write_zeroes": true, 00:08:55.187 "zcopy": true, 00:08:55.187 "get_zone_info": false, 00:08:55.187 "zone_management": false, 00:08:55.187 "zone_append": false, 00:08:55.187 "compare": false, 00:08:55.187 "compare_and_write": false, 00:08:55.187 "abort": true, 00:08:55.187 "seek_hole": false, 00:08:55.187 "seek_data": false, 00:08:55.187 "copy": true, 00:08:55.187 "nvme_iov_md": false 00:08:55.187 }, 00:08:55.187 "memory_domains": [ 00:08:55.187 { 00:08:55.187 "dma_device_id": "system", 00:08:55.187 "dma_device_type": 1 00:08:55.187 }, 00:08:55.187 { 00:08:55.187 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:08:55.187 "dma_device_type": 2 00:08:55.187 } 00:08:55.187 ], 00:08:55.187 "driver_specific": { 00:08:55.187 "passthru": { 00:08:55.187 "name": "pt2", 00:08:55.187 "base_bdev_name": "malloc2" 00:08:55.187 } 00:08:55.187 } 00:08:55.187 }' 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:08:55.187 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:55.444 [2024-07-14 21:08:06.884400] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:55.444 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 26b82a47-4225-11ef-aa83-81fbc7dfef58 '!=' 26b82a47-4225-11ef-aa83-81fbc7dfef58 ']' 00:08:55.444 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:08:55.444 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:08:55.444 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:08:55.444 21:08:06 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:08:55.702 [2024-07-14 21:08:07.176423] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:08:55.702 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:55.702 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:55.702 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:55.702 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:55.702 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:55.702 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:55.702 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:55.702 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:55.702 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:55.702 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:55.702 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:55.702 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:55.959 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:55.959 "name": "raid_bdev1", 00:08:55.959 "uuid": "26b82a47-4225-11ef-aa83-81fbc7dfef58", 00:08:55.959 "strip_size_kb": 0, 00:08:55.959 "state": "online", 00:08:55.959 "raid_level": "raid1", 00:08:55.959 "superblock": true, 00:08:55.959 "num_base_bdevs": 2, 00:08:55.959 "num_base_bdevs_discovered": 1, 00:08:55.959 "num_base_bdevs_operational": 1, 00:08:55.959 "base_bdevs_list": [ 00:08:55.959 { 00:08:55.959 "name": null, 00:08:55.959 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:55.959 "is_configured": false, 00:08:55.959 "data_offset": 
2048, 00:08:55.959 "data_size": 63488 00:08:55.959 }, 00:08:55.959 { 00:08:55.959 "name": "pt2", 00:08:55.959 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:55.959 "is_configured": true, 00:08:55.959 "data_offset": 2048, 00:08:55.959 "data_size": 63488 00:08:55.959 } 00:08:55.959 ] 00:08:55.959 }' 00:08:55.959 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:55.959 21:08:07 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:56.523 21:08:07 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:56.523 [2024-07-14 21:08:08.020390] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:56.523 [2024-07-14 21:08:08.020424] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:56.523 [2024-07-14 21:08:08.020455] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:56.523 [2024-07-14 21:08:08.020471] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:56.523 [2024-07-14 21:08:08.020477] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xeda64e35180 name raid_bdev1, state offline 00:08:56.523 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:56.523 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:08:56.842 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:08:56.842 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:08:56.842 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:08:56.842 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:08:56.842 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:08:57.108 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:08:57.108 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:08:57.108 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:08:57.108 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:08:57.108 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=1 00:08:57.108 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:08:57.366 [2024-07-14 21:08:08.780432] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:08:57.366 [2024-07-14 21:08:08.780514] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:57.366 [2024-07-14 21:08:08.780529] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeda64e34f00 00:08:57.366 [2024-07-14 21:08:08.780538] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:57.366 [2024-07-14 21:08:08.781471] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:57.366 [2024-07-14 
21:08:08.781500] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:08:57.366 [2024-07-14 21:08:08.781532] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:08:57.366 [2024-07-14 21:08:08.781548] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:57.366 [2024-07-14 21:08:08.781579] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xeda64e35180 00:08:57.366 [2024-07-14 21:08:08.781584] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:57.366 [2024-07-14 21:08:08.781604] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xeda64e97e20 00:08:57.366 [2024-07-14 21:08:08.781656] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xeda64e35180 00:08:57.366 [2024-07-14 21:08:08.781662] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xeda64e35180 00:08:57.366 [2024-07-14 21:08:08.781684] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:57.366 pt2 00:08:57.366 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:57.366 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:57.366 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:57.366 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:57.366 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:57.366 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:57.366 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:57.366 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:57.366 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:57.366 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:57.366 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:57.366 21:08:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:57.624 21:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:57.624 "name": "raid_bdev1", 00:08:57.624 "uuid": "26b82a47-4225-11ef-aa83-81fbc7dfef58", 00:08:57.624 "strip_size_kb": 0, 00:08:57.624 "state": "online", 00:08:57.624 "raid_level": "raid1", 00:08:57.624 "superblock": true, 00:08:57.624 "num_base_bdevs": 2, 00:08:57.624 "num_base_bdevs_discovered": 1, 00:08:57.624 "num_base_bdevs_operational": 1, 00:08:57.624 "base_bdevs_list": [ 00:08:57.624 { 00:08:57.624 "name": null, 00:08:57.624 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:57.624 "is_configured": false, 00:08:57.624 "data_offset": 2048, 00:08:57.624 "data_size": 63488 00:08:57.624 }, 00:08:57.624 { 00:08:57.624 "name": "pt2", 00:08:57.624 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:57.624 "is_configured": true, 00:08:57.624 "data_offset": 2048, 00:08:57.624 "data_size": 63488 00:08:57.624 } 00:08:57.624 ] 00:08:57.624 }' 00:08:57.624 21:08:09 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:57.624 21:08:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:57.882 21:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:08:58.140 [2024-07-14 21:08:09.620421] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.140 [2024-07-14 21:08:09.620458] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:08:58.140 [2024-07-14 21:08:09.620489] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:58.140 [2024-07-14 21:08:09.620505] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:58.140 [2024-07-14 21:08:09.620510] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xeda64e35180 name raid_bdev1, state offline 00:08:58.140 21:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:58.140 21:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:08:58.397 21:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:08:58.398 21:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:08:58.398 21:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:08:58.398 21:08:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:08:58.656 [2024-07-14 21:08:10.164425] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:08:58.656 [2024-07-14 21:08:10.164501] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:08:58.656 [2024-07-14 21:08:10.164514] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xeda64e34c80 00:08:58.656 [2024-07-14 21:08:10.164522] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:08:58.656 [2024-07-14 21:08:10.165468] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:08:58.656 [2024-07-14 21:08:10.165497] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:08:58.656 [2024-07-14 21:08:10.165528] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:08:58.656 [2024-07-14 21:08:10.165548] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:08:58.656 [2024-07-14 21:08:10.165592] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:08:58.656 [2024-07-14 21:08:10.165597] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:08:58.656 [2024-07-14 21:08:10.165603] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xeda64e34780 name raid_bdev1, state configuring 00:08:58.656 [2024-07-14 21:08:10.165611] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:08:58.656 [2024-07-14 21:08:10.165628] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xeda64e34780 00:08:58.656 [2024-07-14 21:08:10.165632] 
bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:08:58.656 [2024-07-14 21:08:10.165652] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xeda64e97e20 00:08:58.656 [2024-07-14 21:08:10.165716] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xeda64e34780 00:08:58.656 [2024-07-14 21:08:10.165722] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xeda64e34780 00:08:58.656 [2024-07-14 21:08:10.165742] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:08:58.656 pt1 00:08:58.656 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:08:58.656 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:08:58.656 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:08:58.656 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:08:58.656 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:08:58.656 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:08:58.656 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:08:58.656 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:08:58.656 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:08:58.656 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:08:58.656 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:08:58.656 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:08:58.656 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:08:58.915 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:08:58.915 "name": "raid_bdev1", 00:08:58.915 "uuid": "26b82a47-4225-11ef-aa83-81fbc7dfef58", 00:08:58.915 "strip_size_kb": 0, 00:08:58.915 "state": "online", 00:08:58.915 "raid_level": "raid1", 00:08:58.915 "superblock": true, 00:08:58.915 "num_base_bdevs": 2, 00:08:58.915 "num_base_bdevs_discovered": 1, 00:08:58.915 "num_base_bdevs_operational": 1, 00:08:58.915 "base_bdevs_list": [ 00:08:58.915 { 00:08:58.915 "name": null, 00:08:58.915 "uuid": "00000000-0000-0000-0000-000000000000", 00:08:58.915 "is_configured": false, 00:08:58.915 "data_offset": 2048, 00:08:58.915 "data_size": 63488 00:08:58.915 }, 00:08:58.915 { 00:08:58.915 "name": "pt2", 00:08:58.915 "uuid": "00000000-0000-0000-0000-000000000002", 00:08:58.915 "is_configured": true, 00:08:58.915 "data_offset": 2048, 00:08:58.915 "data_size": 63488 00:08:58.915 } 00:08:58.915 ] 00:08:58.915 }' 00:08:58.915 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:08:58.915 21:08:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.173 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:08:59.173 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r 
'.[].base_bdevs_list[0].is_configured' 00:08:59.431 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:08:59.431 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:08:59.431 21:08:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:08:59.689 [2024-07-14 21:08:11.156465] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:08:59.689 21:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 26b82a47-4225-11ef-aa83-81fbc7dfef58 '!=' 26b82a47-4225-11ef-aa83-81fbc7dfef58 ']' 00:08:59.689 21:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 51280 00:08:59.689 21:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 51280 ']' 00:08:59.689 21:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 51280 00:08:59.689 21:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:08:59.689 21:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:08:59.689 21:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 51280 00:08:59.689 21:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:08:59.689 21:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:08:59.689 killing process with pid 51280 00:08:59.689 21:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:08:59.689 21:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51280' 00:08:59.689 21:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 51280 00:08:59.689 [2024-07-14 21:08:11.184723] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:08:59.689 [2024-07-14 21:08:11.184753] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:08:59.689 [2024-07-14 21:08:11.184768] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:08:59.689 21:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 51280 00:08:59.689 [2024-07-14 21:08:11.184772] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xeda64e34780 name raid_bdev1, state offline 00:08:59.689 [2024-07-14 21:08:11.201986] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:08:59.947 21:08:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:08:59.947 00:08:59.947 real 0m13.527s 00:08:59.947 user 0m24.159s 00:08:59.947 sys 0m2.054s 00:08:59.947 21:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:59.947 ************************************ 00:08:59.947 END TEST raid_superblock_test 00:08:59.947 ************************************ 00:08:59.947 21:08:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:08:59.947 21:08:11 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:08:59.947 21:08:11 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 2 read 00:08:59.947 21:08:11 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:08:59.947 21:08:11 bdev_raid 
-- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:59.947 21:08:11 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 ************************************ 00:09:00.206 START TEST raid_read_error_test 00:09:00.206 ************************************ 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 read 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.RCEcZqg2zs 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51673 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51673 /var/tmp/spdk-raid.sock 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 51673 ']' 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:00.206 Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:00.206 21:08:11 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:00.206 [2024-07-14 21:08:11.514171] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:00.206 [2024-07-14 21:08:11.514345] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:00.782 EAL: TSC is not safe to use in SMP mode 00:09:00.782 EAL: TSC is not invariant 00:09:00.782 [2024-07-14 21:08:12.083386] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:00.782 [2024-07-14 21:08:12.188858] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:09:00.782 [2024-07-14 21:08:12.191464] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.782 [2024-07-14 21:08:12.192514] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:00.782 [2024-07-14 21:08:12.192534] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:01.040 21:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:01.040 21:08:12 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:09:01.040 21:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:01.040 21:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:01.298 BaseBdev1_malloc 00:09:01.298 21:08:12 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:01.556 true 00:09:01.556 21:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:01.813 [2024-07-14 21:08:13.319010] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:01.813 [2024-07-14 21:08:13.319097] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:01.813 [2024-07-14 21:08:13.319133] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f8e16e34780 00:09:01.813 [2024-07-14 21:08:13.319146] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:01.813 [2024-07-14 21:08:13.319900] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:01.813 [2024-07-14 21:08:13.319946] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:01.813 BaseBdev1 00:09:01.813 21:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:01.813 21:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:02.378 BaseBdev2_malloc 00:09:02.378 21:08:13 bdev_raid.raid_read_error_test 
-- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:02.378 true 00:09:02.637 21:08:13 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:02.637 [2024-07-14 21:08:14.183021] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:02.637 [2024-07-14 21:08:14.183092] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:02.637 [2024-07-14 21:08:14.183140] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2f8e16e34c80 00:09:02.637 [2024-07-14 21:08:14.183149] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:02.637 [2024-07-14 21:08:14.183843] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:02.637 [2024-07-14 21:08:14.183902] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:09:02.895 BaseBdev2 00:09:02.895 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:09:02.895 [2024-07-14 21:08:14.407013] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:02.895 [2024-07-14 21:08:14.407594] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:02.895 [2024-07-14 21:08:14.407659] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2f8e16e34f00 00:09:02.895 [2024-07-14 21:08:14.407666] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:02.895 [2024-07-14 21:08:14.407698] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f8e16ea0e20 00:09:02.895 [2024-07-14 21:08:14.407770] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2f8e16e34f00 00:09:02.895 [2024-07-14 21:08:14.407775] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2f8e16e34f00 00:09:02.895 [2024-07-14 21:08:14.407803] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:02.895 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:02.895 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:02.895 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:02.895 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:02.895 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:02.895 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:02.895 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:02.895 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:02.895 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:02.895 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:02.895 21:08:14 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:02.895 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:03.461 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:03.461 "name": "raid_bdev1", 00:09:03.461 "uuid": "2f26cdaf-4225-11ef-aa83-81fbc7dfef58", 00:09:03.461 "strip_size_kb": 0, 00:09:03.461 "state": "online", 00:09:03.461 "raid_level": "raid1", 00:09:03.461 "superblock": true, 00:09:03.461 "num_base_bdevs": 2, 00:09:03.461 "num_base_bdevs_discovered": 2, 00:09:03.461 "num_base_bdevs_operational": 2, 00:09:03.461 "base_bdevs_list": [ 00:09:03.461 { 00:09:03.461 "name": "BaseBdev1", 00:09:03.461 "uuid": "a4922168-0eac-d453-a7d9-8d8629c4ee87", 00:09:03.461 "is_configured": true, 00:09:03.461 "data_offset": 2048, 00:09:03.461 "data_size": 63488 00:09:03.461 }, 00:09:03.461 { 00:09:03.461 "name": "BaseBdev2", 00:09:03.461 "uuid": "67a39c0a-56f5-bd5e-aa59-7239487fc925", 00:09:03.461 "is_configured": true, 00:09:03.461 "data_offset": 2048, 00:09:03.461 "data_size": 63488 00:09:03.461 } 00:09:03.461 ] 00:09:03.461 }' 00:09:03.461 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:03.461 21:08:14 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:03.461 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:09:03.461 21:08:14 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:03.719 [2024-07-14 21:08:15.119231] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2f8e16ea0ec0 00:09:04.653 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=2 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:04.911 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:05.169 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:05.169 "name": "raid_bdev1", 00:09:05.169 "uuid": "2f26cdaf-4225-11ef-aa83-81fbc7dfef58", 00:09:05.169 "strip_size_kb": 0, 00:09:05.169 "state": "online", 00:09:05.169 "raid_level": "raid1", 00:09:05.169 "superblock": true, 00:09:05.169 "num_base_bdevs": 2, 00:09:05.169 "num_base_bdevs_discovered": 2, 00:09:05.169 "num_base_bdevs_operational": 2, 00:09:05.169 "base_bdevs_list": [ 00:09:05.169 { 00:09:05.169 "name": "BaseBdev1", 00:09:05.169 "uuid": "a4922168-0eac-d453-a7d9-8d8629c4ee87", 00:09:05.169 "is_configured": true, 00:09:05.169 "data_offset": 2048, 00:09:05.169 "data_size": 63488 00:09:05.169 }, 00:09:05.169 { 00:09:05.169 "name": "BaseBdev2", 00:09:05.169 "uuid": "67a39c0a-56f5-bd5e-aa59-7239487fc925", 00:09:05.169 "is_configured": true, 00:09:05.169 "data_offset": 2048, 00:09:05.169 "data_size": 63488 00:09:05.169 } 00:09:05.169 ] 00:09:05.169 }' 00:09:05.169 21:08:16 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:05.169 21:08:16 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:05.735 21:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:05.993 [2024-07-14 21:08:17.287678] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:05.993 [2024-07-14 21:08:17.287720] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:05.993 [2024-07-14 21:08:17.288101] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:05.993 [2024-07-14 21:08:17.288111] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:05.993 [2024-07-14 21:08:17.288125] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:05.993 [2024-07-14 21:08:17.288129] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2f8e16e34f00 name raid_bdev1, state offline 00:09:05.993 0 00:09:05.993 21:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51673 00:09:05.993 21:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 51673 ']' 00:09:05.993 21:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 51673 00:09:05.993 21:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:09:05.993 21:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:05.993 21:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51673 00:09:05.993 21:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:09:05.993 21:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:09:05.993 killing process with pid 51673 00:09:05.993 21:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:09:05.993 21:08:17 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51673' 00:09:05.993 21:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 51673 00:09:05.993 [2024-07-14 21:08:17.316630] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:05.993 21:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 51673 00:09:05.993 [2024-07-14 21:08:17.328016] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:05.994 21:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.RCEcZqg2zs 00:09:05.994 21:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:09:05.994 21:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:09:05.994 21:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:09:05.994 21:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:09:05.994 21:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:05.994 21:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:09:05.994 21:08:17 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:05.994 00:09:05.994 real 0m6.022s 00:09:05.994 user 0m9.196s 00:09:05.994 sys 0m1.140s 00:09:05.994 21:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.994 ************************************ 00:09:05.994 END TEST raid_read_error_test 00:09:05.994 ************************************ 00:09:05.994 21:08:17 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.252 21:08:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:06.252 21:08:17 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 2 write 00:09:06.252 21:08:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:06.252 21:08:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:06.252 21:08:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:06.252 ************************************ 00:09:06.252 START TEST raid_write_error_test 00:09:06.252 ************************************ 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 2 write 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=2 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:09:06.252 21:08:17 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.15d9yhIFxf 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=51797 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 51797 /var/tmp/spdk-raid.sock 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 51797 ']' 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:06.252 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:06.252 21:08:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:06.252 [2024-07-14 21:08:17.588215] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:06.252 [2024-07-14 21:08:17.588505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:06.873 EAL: TSC is not safe to use in SMP mode 00:09:06.873 EAL: TSC is not invariant 00:09:06.873 [2024-07-14 21:08:18.187293] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:06.873 [2024-07-14 21:08:18.287436] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
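For reference, the raid_write_error_test run that follows assembles its array from RPCs that appear verbatim in this trace. A minimal standalone sketch of the bdev chain being built, assuming the stock rpc.py client at the path shown in the xtrace; the "$rpc" shorthand, the consolidated ordering, and the comments are illustrative rather than part of the script, and the EE_ prefix is how bdev_error names the device it wraps:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # One leg of the array: malloc disk -> error injector -> passthru claim.
  $rpc bdev_malloc_create 32 512 -b BaseBdev1_malloc
  $rpc bdev_error_create BaseBdev1_malloc            # exposes EE_BaseBdev1_malloc
  $rpc bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1
  # BaseBdev2 is built the same way; both legs then form a raid1 with an
  # on-disk superblock (-s).
  $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s
  # Arm the injector so writes reaching BaseBdev1 fail while bdevperf runs
  # its randrw workload against raid_bdev1.
  $rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure

Further down, the trace shows bdev_raid reacting to the injected write failure by failing BaseBdev1 out of slot 0 and keeping raid_bdev1 online with num_base_bdevs_discovered reduced to 1, which is exactly what the final bdev_raid_get_bdevs dump asserts.
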
00:09:06.873 [2024-07-14 21:08:18.289783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:06.873 [2024-07-14 21:08:18.290598] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:06.873 [2024-07-14 21:08:18.290612] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:07.132 21:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:07.132 21:08:18 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:09:07.132 21:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:07.132 21:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:09:07.390 BaseBdev1_malloc 00:09:07.390 21:08:18 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:09:07.648 true 00:09:07.648 21:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:09:07.906 [2024-07-14 21:08:19.416020] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:09:07.906 [2024-07-14 21:08:19.416090] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:07.906 [2024-07-14 21:08:19.416119] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf23c6634780 00:09:07.906 [2024-07-14 21:08:19.416128] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:07.906 [2024-07-14 21:08:19.416805] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:07.906 [2024-07-14 21:08:19.416829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:09:07.906 BaseBdev1 00:09:07.906 21:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:09:07.906 21:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:09:08.165 BaseBdev2_malloc 00:09:08.165 21:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:09:08.423 true 00:09:08.423 21:08:19 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:09:08.682 [2024-07-14 21:08:20.224130] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:09:08.682 [2024-07-14 21:08:20.224199] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:09:08.682 [2024-07-14 21:08:20.224257] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf23c6634c80 00:09:08.682 [2024-07-14 21:08:20.224266] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:09:08.682 [2024-07-14 21:08:20.225061] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:09:08.682 [2024-07-14 21:08:20.225082] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:09:08.682 BaseBdev2 00:09:08.941 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 -s 00:09:08.941 [2024-07-14 21:08:20.468228] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:08.941 [2024-07-14 21:08:20.468950] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:08.941 [2024-07-14 21:08:20.469015] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf23c6634f00 00:09:08.941 [2024-07-14 21:08:20.469022] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:09:08.941 [2024-07-14 21:08:20.469065] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf23c66a0e20 00:09:08.941 [2024-07-14 21:08:20.469164] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf23c6634f00 00:09:08.941 [2024-07-14 21:08:20.469169] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf23c6634f00 00:09:08.941 [2024-07-14 21:08:20.469196] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:08.941 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:09:08.941 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:08.941 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:08.941 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:08.941 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:08.941 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:08.941 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:08.941 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:08.941 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:09.200 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:09.200 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:09.200 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:09.459 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:09.459 "name": "raid_bdev1", 00:09:09.459 "uuid": "32c3abe2-4225-11ef-aa83-81fbc7dfef58", 00:09:09.459 "strip_size_kb": 0, 00:09:09.459 "state": "online", 00:09:09.459 "raid_level": "raid1", 00:09:09.459 "superblock": true, 00:09:09.459 "num_base_bdevs": 2, 00:09:09.459 "num_base_bdevs_discovered": 2, 00:09:09.459 "num_base_bdevs_operational": 2, 00:09:09.459 "base_bdevs_list": [ 00:09:09.459 { 00:09:09.459 "name": "BaseBdev1", 00:09:09.459 "uuid": "d015b9bf-f0b8-5555-bc2b-55ee72475dac", 00:09:09.459 "is_configured": true, 00:09:09.459 "data_offset": 2048, 00:09:09.459 "data_size": 63488 00:09:09.459 }, 00:09:09.459 { 00:09:09.459 "name": "BaseBdev2", 00:09:09.459 "uuid": "c2543cd7-dd60-565d-99af-b775d5eedaf5", 
00:09:09.459 "is_configured": true, 00:09:09.459 "data_offset": 2048, 00:09:09.459 "data_size": 63488 00:09:09.459 } 00:09:09.459 ] 00:09:09.459 }' 00:09:09.459 21:08:20 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:09.459 21:08:20 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:09.717 21:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:09:09.717 21:08:21 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:09:09.717 [2024-07-14 21:08:21.180495] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf23c66a0ec0 00:09:10.650 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:09:10.908 [2024-07-14 21:08:22.380575] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:09:10.908 [2024-07-14 21:08:22.380644] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:10.908 [2024-07-14 21:08:22.380826] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0xf23c66a0ec0 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=1 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:10.908 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:09:11.168 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:11.168 "name": "raid_bdev1", 00:09:11.168 "uuid": "32c3abe2-4225-11ef-aa83-81fbc7dfef58", 00:09:11.168 "strip_size_kb": 0, 00:09:11.168 "state": "online", 00:09:11.168 "raid_level": "raid1", 00:09:11.168 
"superblock": true, 00:09:11.168 "num_base_bdevs": 2, 00:09:11.168 "num_base_bdevs_discovered": 1, 00:09:11.168 "num_base_bdevs_operational": 1, 00:09:11.168 "base_bdevs_list": [ 00:09:11.168 { 00:09:11.168 "name": null, 00:09:11.168 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:11.168 "is_configured": false, 00:09:11.168 "data_offset": 2048, 00:09:11.168 "data_size": 63488 00:09:11.168 }, 00:09:11.168 { 00:09:11.168 "name": "BaseBdev2", 00:09:11.168 "uuid": "c2543cd7-dd60-565d-99af-b775d5eedaf5", 00:09:11.168 "is_configured": true, 00:09:11.168 "data_offset": 2048, 00:09:11.168 "data_size": 63488 00:09:11.168 } 00:09:11.168 ] 00:09:11.168 }' 00:09:11.168 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:11.168 21:08:22 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.427 21:08:22 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:09:11.686 [2024-07-14 21:08:23.205632] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:09:11.686 [2024-07-14 21:08:23.205656] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:11.686 [2024-07-14 21:08:23.205977] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:11.686 [2024-07-14 21:08:23.205985] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:11.686 [2024-07-14 21:08:23.205995] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:11.686 [2024-07-14 21:08:23.205999] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf23c6634f00 name raid_bdev1, state offline 00:09:11.686 0 00:09:11.686 21:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 51797 00:09:11.686 21:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 51797 ']' 00:09:11.686 21:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 51797 00:09:11.686 21:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:09:11.686 21:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:11.686 21:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 51797 00:09:11.686 21:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:09:11.686 21:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:09:11.686 killing process with pid 51797 00:09:11.686 21:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:09:11.686 21:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51797' 00:09:11.686 21:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 51797 00:09:11.686 [2024-07-14 21:08:23.231688] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:11.686 21:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 51797 00:09:11.946 [2024-07-14 21:08:23.242213] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:11.946 21:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.15d9yhIFxf 00:09:11.946 21:08:23 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:09:11.946 21:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:09:11.946 21:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:09:11.946 21:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:09:11.946 21:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:11.946 21:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:09:11.946 21:08:23 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:09:11.946 00:09:11.946 real 0m5.856s 00:09:11.946 user 0m8.870s 00:09:11.946 sys 0m1.140s 00:09:11.946 21:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:11.946 ************************************ 00:09:11.946 END TEST raid_write_error_test 00:09:11.946 ************************************ 00:09:11.946 21:08:23 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.946 21:08:23 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:11.946 21:08:23 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:09:11.946 21:08:23 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:09:11.946 21:08:23 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 3 false 00:09:11.946 21:08:23 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:11.946 21:08:23 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:11.946 21:08:23 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:11.946 ************************************ 00:09:11.946 START TEST raid_state_function_test 00:09:11.946 ************************************ 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 false 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:11.946 21:08:23 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=51923 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 51923' 00:09:11.946 Process raid pid: 51923 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 51923 /var/tmp/spdk-raid.sock 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 51923 ']' 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:11.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:11.946 21:08:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:11.946 [2024-07-14 21:08:23.485694] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:11.946 [2024-07-14 21:08:23.485907] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:12.514 EAL: TSC is not safe to use in SMP mode 00:09:12.514 EAL: TSC is not invariant 00:09:12.514 [2024-07-14 21:08:24.029105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:12.774 [2024-07-14 21:08:24.118547] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
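The state-function test starting here exercises bdev_raid's "configuring" state machine rather than I/O: Existed_Raid is declared over base bdevs that do not exist yet and only progresses as real bdevs appear. A condensed sketch of that check, built from commands visible in this trace; the "$rpc" shorthand and the trailing .state/.num_base_bdevs_discovered filters are illustrative additions to the jq pipeline the script actually runs:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # raid0 array declared over three bdevs that don't exist yet.
  $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid
  # verify_raid_bdev_state inspects it via the same RPC + jq pair:
  $rpc bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid") | .state, .num_base_bdevs_discovered'
  # -> configuring / 0; once bdev_malloc_create supplies BaseBdev1 the
  #    discovered count ticks up to 1 while the state stays "configuring".

The JSON dumps that follow in the trace show this progression directly: num_base_bdevs_discovered goes from 0 to 1 after BaseBdev1 is claimed, with num_base_bdevs_operational held at 3.
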
00:09:12.774 [2024-07-14 21:08:24.120775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:12.774 [2024-07-14 21:08:24.121580] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:12.774 [2024-07-14 21:08:24.121596] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:13.032 21:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:13.032 21:08:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:09:13.032 21:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:13.289 [2024-07-14 21:08:24.774716] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:13.289 [2024-07-14 21:08:24.774762] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:13.289 [2024-07-14 21:08:24.774767] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:13.289 [2024-07-14 21:08:24.774793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:13.289 [2024-07-14 21:08:24.774796] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:13.289 [2024-07-14 21:08:24.774803] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:13.289 21:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:13.289 21:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:13.289 21:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:13.289 21:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:13.289 21:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:13.289 21:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:13.289 21:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:13.289 21:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:13.289 21:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:13.289 21:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:13.289 21:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:13.289 21:08:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:13.547 21:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:13.547 "name": "Existed_Raid", 00:09:13.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.547 "strip_size_kb": 64, 00:09:13.547 "state": "configuring", 00:09:13.547 "raid_level": "raid0", 00:09:13.547 "superblock": false, 00:09:13.547 "num_base_bdevs": 3, 00:09:13.547 "num_base_bdevs_discovered": 0, 00:09:13.547 "num_base_bdevs_operational": 3, 00:09:13.547 "base_bdevs_list": [ 
00:09:13.547 { 00:09:13.547 "name": "BaseBdev1", 00:09:13.547 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.547 "is_configured": false, 00:09:13.548 "data_offset": 0, 00:09:13.548 "data_size": 0 00:09:13.548 }, 00:09:13.548 { 00:09:13.548 "name": "BaseBdev2", 00:09:13.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.548 "is_configured": false, 00:09:13.548 "data_offset": 0, 00:09:13.548 "data_size": 0 00:09:13.548 }, 00:09:13.548 { 00:09:13.548 "name": "BaseBdev3", 00:09:13.548 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:13.548 "is_configured": false, 00:09:13.548 "data_offset": 0, 00:09:13.548 "data_size": 0 00:09:13.548 } 00:09:13.548 ] 00:09:13.548 }' 00:09:13.548 21:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:13.548 21:08:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:13.806 21:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:14.064 [2024-07-14 21:08:25.570738] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:14.064 [2024-07-14 21:08:25.570757] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e755834500 name Existed_Raid, state configuring 00:09:14.064 21:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:14.631 [2024-07-14 21:08:25.878753] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:14.631 [2024-07-14 21:08:25.878807] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:14.631 [2024-07-14 21:08:25.878812] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:14.631 [2024-07-14 21:08:25.878837] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:14.631 [2024-07-14 21:08:25.878840] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:14.631 [2024-07-14 21:08:25.878847] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:14.631 21:08:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:14.631 [2024-07-14 21:08:26.143793] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:14.631 BaseBdev1 00:09:14.631 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:14.631 21:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:14.631 21:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:14.631 21:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:14.631 21:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:14.631 21:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:14.631 21:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:14.889 21:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:15.148 [ 00:09:15.148 { 00:09:15.148 "name": "BaseBdev1", 00:09:15.148 "aliases": [ 00:09:15.148 "36258a26-4225-11ef-aa83-81fbc7dfef58" 00:09:15.148 ], 00:09:15.148 "product_name": "Malloc disk", 00:09:15.148 "block_size": 512, 00:09:15.148 "num_blocks": 65536, 00:09:15.148 "uuid": "36258a26-4225-11ef-aa83-81fbc7dfef58", 00:09:15.148 "assigned_rate_limits": { 00:09:15.148 "rw_ios_per_sec": 0, 00:09:15.148 "rw_mbytes_per_sec": 0, 00:09:15.148 "r_mbytes_per_sec": 0, 00:09:15.148 "w_mbytes_per_sec": 0 00:09:15.148 }, 00:09:15.148 "claimed": true, 00:09:15.148 "claim_type": "exclusive_write", 00:09:15.148 "zoned": false, 00:09:15.148 "supported_io_types": { 00:09:15.148 "read": true, 00:09:15.148 "write": true, 00:09:15.148 "unmap": true, 00:09:15.148 "flush": true, 00:09:15.148 "reset": true, 00:09:15.148 "nvme_admin": false, 00:09:15.148 "nvme_io": false, 00:09:15.148 "nvme_io_md": false, 00:09:15.148 "write_zeroes": true, 00:09:15.148 "zcopy": true, 00:09:15.148 "get_zone_info": false, 00:09:15.148 "zone_management": false, 00:09:15.148 "zone_append": false, 00:09:15.148 "compare": false, 00:09:15.148 "compare_and_write": false, 00:09:15.148 "abort": true, 00:09:15.148 "seek_hole": false, 00:09:15.148 "seek_data": false, 00:09:15.148 "copy": true, 00:09:15.148 "nvme_iov_md": false 00:09:15.148 }, 00:09:15.148 "memory_domains": [ 00:09:15.148 { 00:09:15.148 "dma_device_id": "system", 00:09:15.148 "dma_device_type": 1 00:09:15.148 }, 00:09:15.148 { 00:09:15.148 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:15.148 "dma_device_type": 2 00:09:15.148 } 00:09:15.148 ], 00:09:15.148 "driver_specific": {} 00:09:15.148 } 00:09:15.148 ] 00:09:15.148 21:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:15.148 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:15.148 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:15.148 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:15.148 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:15.148 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:15.148 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:15.148 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:15.148 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:15.148 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:15.148 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:15.148 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:15.148 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:15.406 21:08:26 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:15.406 "name": "Existed_Raid", 00:09:15.406 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.406 "strip_size_kb": 64, 00:09:15.406 "state": "configuring", 00:09:15.406 "raid_level": "raid0", 00:09:15.406 "superblock": false, 00:09:15.406 "num_base_bdevs": 3, 00:09:15.406 "num_base_bdevs_discovered": 1, 00:09:15.406 "num_base_bdevs_operational": 3, 00:09:15.406 "base_bdevs_list": [ 00:09:15.406 { 00:09:15.406 "name": "BaseBdev1", 00:09:15.407 "uuid": "36258a26-4225-11ef-aa83-81fbc7dfef58", 00:09:15.407 "is_configured": true, 00:09:15.407 "data_offset": 0, 00:09:15.407 "data_size": 65536 00:09:15.407 }, 00:09:15.407 { 00:09:15.407 "name": "BaseBdev2", 00:09:15.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.407 "is_configured": false, 00:09:15.407 "data_offset": 0, 00:09:15.407 "data_size": 0 00:09:15.407 }, 00:09:15.407 { 00:09:15.407 "name": "BaseBdev3", 00:09:15.407 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:15.407 "is_configured": false, 00:09:15.407 "data_offset": 0, 00:09:15.407 "data_size": 0 00:09:15.407 } 00:09:15.407 ] 00:09:15.407 }' 00:09:15.407 21:08:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:15.407 21:08:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:15.665 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:15.924 [2024-07-14 21:08:27.394815] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:15.924 [2024-07-14 21:08:27.394854] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e755834500 name Existed_Raid, state configuring 00:09:15.924 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:16.184 [2024-07-14 21:08:27.650829] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:16.184 [2024-07-14 21:08:27.651738] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:16.184 [2024-07-14 21:08:27.651787] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:16.184 [2024-07-14 21:08:27.651792] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:16.184 [2024-07-14 21:08:27.651816] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:16.184 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:16.184 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:16.184 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:16.184 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:16.184 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:16.184 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:16.184 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:16.184 21:08:27 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:16.184 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:16.184 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:16.184 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:16.184 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:16.184 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:16.184 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:16.443 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:16.443 "name": "Existed_Raid", 00:09:16.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.443 "strip_size_kb": 64, 00:09:16.443 "state": "configuring", 00:09:16.443 "raid_level": "raid0", 00:09:16.443 "superblock": false, 00:09:16.443 "num_base_bdevs": 3, 00:09:16.443 "num_base_bdevs_discovered": 1, 00:09:16.443 "num_base_bdevs_operational": 3, 00:09:16.443 "base_bdevs_list": [ 00:09:16.443 { 00:09:16.443 "name": "BaseBdev1", 00:09:16.443 "uuid": "36258a26-4225-11ef-aa83-81fbc7dfef58", 00:09:16.443 "is_configured": true, 00:09:16.443 "data_offset": 0, 00:09:16.443 "data_size": 65536 00:09:16.443 }, 00:09:16.443 { 00:09:16.443 "name": "BaseBdev2", 00:09:16.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.443 "is_configured": false, 00:09:16.443 "data_offset": 0, 00:09:16.443 "data_size": 0 00:09:16.443 }, 00:09:16.443 { 00:09:16.443 "name": "BaseBdev3", 00:09:16.443 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:16.443 "is_configured": false, 00:09:16.443 "data_offset": 0, 00:09:16.443 "data_size": 0 00:09:16.443 } 00:09:16.443 ] 00:09:16.443 }' 00:09:16.443 21:08:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:16.443 21:08:27 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:16.701 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:16.960 [2024-07-14 21:08:28.479015] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:16.960 BaseBdev2 00:09:16.960 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:16.960 21:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:16.960 21:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:16.960 21:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:16.960 21:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:16.960 21:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:16.960 21:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:17.219 21:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:17.478 [ 00:09:17.478 { 00:09:17.478 "name": "BaseBdev2", 00:09:17.478 "aliases": [ 00:09:17.478 "378a009b-4225-11ef-aa83-81fbc7dfef58" 00:09:17.478 ], 00:09:17.478 "product_name": "Malloc disk", 00:09:17.478 "block_size": 512, 00:09:17.478 "num_blocks": 65536, 00:09:17.478 "uuid": "378a009b-4225-11ef-aa83-81fbc7dfef58", 00:09:17.478 "assigned_rate_limits": { 00:09:17.478 "rw_ios_per_sec": 0, 00:09:17.478 "rw_mbytes_per_sec": 0, 00:09:17.478 "r_mbytes_per_sec": 0, 00:09:17.478 "w_mbytes_per_sec": 0 00:09:17.478 }, 00:09:17.478 "claimed": true, 00:09:17.478 "claim_type": "exclusive_write", 00:09:17.478 "zoned": false, 00:09:17.478 "supported_io_types": { 00:09:17.478 "read": true, 00:09:17.478 "write": true, 00:09:17.478 "unmap": true, 00:09:17.478 "flush": true, 00:09:17.478 "reset": true, 00:09:17.478 "nvme_admin": false, 00:09:17.478 "nvme_io": false, 00:09:17.478 "nvme_io_md": false, 00:09:17.478 "write_zeroes": true, 00:09:17.478 "zcopy": true, 00:09:17.478 "get_zone_info": false, 00:09:17.478 "zone_management": false, 00:09:17.478 "zone_append": false, 00:09:17.478 "compare": false, 00:09:17.478 "compare_and_write": false, 00:09:17.478 "abort": true, 00:09:17.478 "seek_hole": false, 00:09:17.478 "seek_data": false, 00:09:17.478 "copy": true, 00:09:17.478 "nvme_iov_md": false 00:09:17.478 }, 00:09:17.478 "memory_domains": [ 00:09:17.478 { 00:09:17.478 "dma_device_id": "system", 00:09:17.478 "dma_device_type": 1 00:09:17.478 }, 00:09:17.478 { 00:09:17.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:17.478 "dma_device_type": 2 00:09:17.478 } 00:09:17.478 ], 00:09:17.478 "driver_specific": {} 00:09:17.478 } 00:09:17.478 ] 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:17.478 21:08:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 
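For reference, the state machine this test is exercising reduces to the following RPC sequence — a minimal sketch, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock. The RPC= shorthand is ours; every command, the socket path, and the 32 MiB / 512 B malloc geometry (65536 blocks) are taken verbatim from the trace above:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # A raid0 bdev created before its base bdevs exist stays in the "configuring" state.
  $RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # Create a 32 MiB malloc base (512 B blocks -> 65536 blocks); the raid claims it on examine.
  $RPC bdev_malloc_create 32 512 -b BaseBdev1
  $RPC bdev_wait_for_examine
  $RPC bdev_get_bdevs -b BaseBdev1 -t 2000

  # Inspect the raid: "configuring" until num_base_bdevs_discovered reaches 3, then "online".
  $RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

  # Tear down between sub-tests.
  $RPC bdev_raid_delete Existed_Raid

Because raid0 carries no redundancy (has_redundancy returns 1 for it later in the trace), deleting a claimed base bdev drops the array from "online" to "offline" rather than leaving it degraded.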
00:09:17.739 21:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:17.739 "name": "Existed_Raid", 00:09:17.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.739 "strip_size_kb": 64, 00:09:17.739 "state": "configuring", 00:09:17.739 "raid_level": "raid0", 00:09:17.739 "superblock": false, 00:09:17.739 "num_base_bdevs": 3, 00:09:17.739 "num_base_bdevs_discovered": 2, 00:09:17.739 "num_base_bdevs_operational": 3, 00:09:17.739 "base_bdevs_list": [ 00:09:17.739 { 00:09:17.739 "name": "BaseBdev1", 00:09:17.739 "uuid": "36258a26-4225-11ef-aa83-81fbc7dfef58", 00:09:17.739 "is_configured": true, 00:09:17.739 "data_offset": 0, 00:09:17.739 "data_size": 65536 00:09:17.739 }, 00:09:17.739 { 00:09:17.739 "name": "BaseBdev2", 00:09:17.739 "uuid": "378a009b-4225-11ef-aa83-81fbc7dfef58", 00:09:17.739 "is_configured": true, 00:09:17.739 "data_offset": 0, 00:09:17.739 "data_size": 65536 00:09:17.739 }, 00:09:17.739 { 00:09:17.739 "name": "BaseBdev3", 00:09:17.739 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:17.739 "is_configured": false, 00:09:17.739 "data_offset": 0, 00:09:17.739 "data_size": 0 00:09:17.739 } 00:09:17.739 ] 00:09:17.739 }' 00:09:17.739 21:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:17.739 21:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:17.998 21:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:18.256 [2024-07-14 21:08:29.751059] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:18.256 [2024-07-14 21:08:29.751082] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2e755834a00 00:09:18.256 [2024-07-14 21:08:29.751102] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:18.256 [2024-07-14 21:08:29.751122] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2e755897e20 00:09:18.256 [2024-07-14 21:08:29.751207] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2e755834a00 00:09:18.256 [2024-07-14 21:08:29.751211] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2e755834a00 00:09:18.256 [2024-07-14 21:08:29.751242] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:18.256 BaseBdev3 00:09:18.256 21:08:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:18.256 21:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:18.256 21:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:18.256 21:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:18.256 21:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:18.256 21:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:18.256 21:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:18.515 21:08:29 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:18.773 [ 00:09:18.773 { 00:09:18.773 "name": "BaseBdev3", 00:09:18.773 "aliases": [ 00:09:18.773 "384c1a60-4225-11ef-aa83-81fbc7dfef58" 00:09:18.773 ], 00:09:18.774 "product_name": "Malloc disk", 00:09:18.774 "block_size": 512, 00:09:18.774 "num_blocks": 65536, 00:09:18.774 "uuid": "384c1a60-4225-11ef-aa83-81fbc7dfef58", 00:09:18.774 "assigned_rate_limits": { 00:09:18.774 "rw_ios_per_sec": 0, 00:09:18.774 "rw_mbytes_per_sec": 0, 00:09:18.774 "r_mbytes_per_sec": 0, 00:09:18.774 "w_mbytes_per_sec": 0 00:09:18.774 }, 00:09:18.774 "claimed": true, 00:09:18.774 "claim_type": "exclusive_write", 00:09:18.774 "zoned": false, 00:09:18.774 "supported_io_types": { 00:09:18.774 "read": true, 00:09:18.774 "write": true, 00:09:18.774 "unmap": true, 00:09:18.774 "flush": true, 00:09:18.774 "reset": true, 00:09:18.774 "nvme_admin": false, 00:09:18.774 "nvme_io": false, 00:09:18.774 "nvme_io_md": false, 00:09:18.774 "write_zeroes": true, 00:09:18.774 "zcopy": true, 00:09:18.774 "get_zone_info": false, 00:09:18.774 "zone_management": false, 00:09:18.774 "zone_append": false, 00:09:18.774 "compare": false, 00:09:18.774 "compare_and_write": false, 00:09:18.774 "abort": true, 00:09:18.774 "seek_hole": false, 00:09:18.774 "seek_data": false, 00:09:18.774 "copy": true, 00:09:18.774 "nvme_iov_md": false 00:09:18.774 }, 00:09:18.774 "memory_domains": [ 00:09:18.774 { 00:09:18.774 "dma_device_id": "system", 00:09:18.774 "dma_device_type": 1 00:09:18.774 }, 00:09:18.774 { 00:09:18.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:18.774 "dma_device_type": 2 00:09:18.774 } 00:09:18.774 ], 00:09:18.774 "driver_specific": {} 00:09:18.774 } 00:09:18.774 ] 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:18.774 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:19.033 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 
-- # raid_bdev_info='{ 00:09:19.033 "name": "Existed_Raid", 00:09:19.033 "uuid": "384c20a0-4225-11ef-aa83-81fbc7dfef58", 00:09:19.033 "strip_size_kb": 64, 00:09:19.033 "state": "online", 00:09:19.033 "raid_level": "raid0", 00:09:19.033 "superblock": false, 00:09:19.033 "num_base_bdevs": 3, 00:09:19.033 "num_base_bdevs_discovered": 3, 00:09:19.033 "num_base_bdevs_operational": 3, 00:09:19.033 "base_bdevs_list": [ 00:09:19.033 { 00:09:19.033 "name": "BaseBdev1", 00:09:19.033 "uuid": "36258a26-4225-11ef-aa83-81fbc7dfef58", 00:09:19.033 "is_configured": true, 00:09:19.033 "data_offset": 0, 00:09:19.033 "data_size": 65536 00:09:19.033 }, 00:09:19.033 { 00:09:19.033 "name": "BaseBdev2", 00:09:19.033 "uuid": "378a009b-4225-11ef-aa83-81fbc7dfef58", 00:09:19.033 "is_configured": true, 00:09:19.033 "data_offset": 0, 00:09:19.033 "data_size": 65536 00:09:19.033 }, 00:09:19.033 { 00:09:19.033 "name": "BaseBdev3", 00:09:19.033 "uuid": "384c1a60-4225-11ef-aa83-81fbc7dfef58", 00:09:19.033 "is_configured": true, 00:09:19.033 "data_offset": 0, 00:09:19.033 "data_size": 65536 00:09:19.033 } 00:09:19.033 ] 00:09:19.033 }' 00:09:19.033 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:19.033 21:08:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:19.291 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:19.291 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:19.291 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:19.292 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:19.292 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:19.292 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:19.292 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:19.292 21:08:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:19.550 [2024-07-14 21:08:31.023095] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:19.550 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:19.550 "name": "Existed_Raid", 00:09:19.550 "aliases": [ 00:09:19.550 "384c20a0-4225-11ef-aa83-81fbc7dfef58" 00:09:19.550 ], 00:09:19.550 "product_name": "Raid Volume", 00:09:19.550 "block_size": 512, 00:09:19.550 "num_blocks": 196608, 00:09:19.550 "uuid": "384c20a0-4225-11ef-aa83-81fbc7dfef58", 00:09:19.550 "assigned_rate_limits": { 00:09:19.550 "rw_ios_per_sec": 0, 00:09:19.550 "rw_mbytes_per_sec": 0, 00:09:19.550 "r_mbytes_per_sec": 0, 00:09:19.550 "w_mbytes_per_sec": 0 00:09:19.550 }, 00:09:19.550 "claimed": false, 00:09:19.550 "zoned": false, 00:09:19.550 "supported_io_types": { 00:09:19.550 "read": true, 00:09:19.550 "write": true, 00:09:19.550 "unmap": true, 00:09:19.550 "flush": true, 00:09:19.550 "reset": true, 00:09:19.550 "nvme_admin": false, 00:09:19.550 "nvme_io": false, 00:09:19.550 "nvme_io_md": false, 00:09:19.550 "write_zeroes": true, 00:09:19.550 "zcopy": false, 00:09:19.550 "get_zone_info": false, 00:09:19.550 "zone_management": false, 00:09:19.550 "zone_append": false, 00:09:19.550 "compare": false, 
00:09:19.550 "compare_and_write": false, 00:09:19.550 "abort": false, 00:09:19.550 "seek_hole": false, 00:09:19.550 "seek_data": false, 00:09:19.550 "copy": false, 00:09:19.550 "nvme_iov_md": false 00:09:19.550 }, 00:09:19.550 "memory_domains": [ 00:09:19.550 { 00:09:19.550 "dma_device_id": "system", 00:09:19.550 "dma_device_type": 1 00:09:19.550 }, 00:09:19.550 { 00:09:19.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.550 "dma_device_type": 2 00:09:19.550 }, 00:09:19.550 { 00:09:19.550 "dma_device_id": "system", 00:09:19.550 "dma_device_type": 1 00:09:19.550 }, 00:09:19.550 { 00:09:19.550 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.550 "dma_device_type": 2 00:09:19.550 }, 00:09:19.550 { 00:09:19.550 "dma_device_id": "system", 00:09:19.550 "dma_device_type": 1 00:09:19.550 }, 00:09:19.551 { 00:09:19.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.551 "dma_device_type": 2 00:09:19.551 } 00:09:19.551 ], 00:09:19.551 "driver_specific": { 00:09:19.551 "raid": { 00:09:19.551 "uuid": "384c20a0-4225-11ef-aa83-81fbc7dfef58", 00:09:19.551 "strip_size_kb": 64, 00:09:19.551 "state": "online", 00:09:19.551 "raid_level": "raid0", 00:09:19.551 "superblock": false, 00:09:19.551 "num_base_bdevs": 3, 00:09:19.551 "num_base_bdevs_discovered": 3, 00:09:19.551 "num_base_bdevs_operational": 3, 00:09:19.551 "base_bdevs_list": [ 00:09:19.551 { 00:09:19.551 "name": "BaseBdev1", 00:09:19.551 "uuid": "36258a26-4225-11ef-aa83-81fbc7dfef58", 00:09:19.551 "is_configured": true, 00:09:19.551 "data_offset": 0, 00:09:19.551 "data_size": 65536 00:09:19.551 }, 00:09:19.551 { 00:09:19.551 "name": "BaseBdev2", 00:09:19.551 "uuid": "378a009b-4225-11ef-aa83-81fbc7dfef58", 00:09:19.551 "is_configured": true, 00:09:19.551 "data_offset": 0, 00:09:19.551 "data_size": 65536 00:09:19.551 }, 00:09:19.551 { 00:09:19.551 "name": "BaseBdev3", 00:09:19.551 "uuid": "384c1a60-4225-11ef-aa83-81fbc7dfef58", 00:09:19.551 "is_configured": true, 00:09:19.551 "data_offset": 0, 00:09:19.551 "data_size": 65536 00:09:19.551 } 00:09:19.551 ] 00:09:19.551 } 00:09:19.551 } 00:09:19.551 }' 00:09:19.551 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:19.551 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:19.551 BaseBdev2 00:09:19.551 BaseBdev3' 00:09:19.551 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:19.551 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:19.551 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:19.810 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:19.810 "name": "BaseBdev1", 00:09:19.810 "aliases": [ 00:09:19.810 "36258a26-4225-11ef-aa83-81fbc7dfef58" 00:09:19.810 ], 00:09:19.810 "product_name": "Malloc disk", 00:09:19.810 "block_size": 512, 00:09:19.810 "num_blocks": 65536, 00:09:19.810 "uuid": "36258a26-4225-11ef-aa83-81fbc7dfef58", 00:09:19.810 "assigned_rate_limits": { 00:09:19.810 "rw_ios_per_sec": 0, 00:09:19.810 "rw_mbytes_per_sec": 0, 00:09:19.810 "r_mbytes_per_sec": 0, 00:09:19.810 "w_mbytes_per_sec": 0 00:09:19.810 }, 00:09:19.810 "claimed": true, 00:09:19.810 "claim_type": "exclusive_write", 00:09:19.810 "zoned": false, 00:09:19.810 
"supported_io_types": { 00:09:19.810 "read": true, 00:09:19.810 "write": true, 00:09:19.810 "unmap": true, 00:09:19.810 "flush": true, 00:09:19.810 "reset": true, 00:09:19.810 "nvme_admin": false, 00:09:19.810 "nvme_io": false, 00:09:19.810 "nvme_io_md": false, 00:09:19.810 "write_zeroes": true, 00:09:19.810 "zcopy": true, 00:09:19.810 "get_zone_info": false, 00:09:19.810 "zone_management": false, 00:09:19.810 "zone_append": false, 00:09:19.810 "compare": false, 00:09:19.810 "compare_and_write": false, 00:09:19.810 "abort": true, 00:09:19.810 "seek_hole": false, 00:09:19.810 "seek_data": false, 00:09:19.810 "copy": true, 00:09:19.810 "nvme_iov_md": false 00:09:19.810 }, 00:09:19.810 "memory_domains": [ 00:09:19.810 { 00:09:19.810 "dma_device_id": "system", 00:09:19.810 "dma_device_type": 1 00:09:19.810 }, 00:09:19.810 { 00:09:19.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:19.810 "dma_device_type": 2 00:09:19.810 } 00:09:19.810 ], 00:09:19.810 "driver_specific": {} 00:09:19.810 }' 00:09:19.810 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:19.810 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:19.810 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:19.810 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:20.069 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:20.069 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:20.069 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:20.069 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:20.069 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:20.069 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:20.069 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:20.069 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:20.069 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:20.069 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:20.069 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:20.329 "name": "BaseBdev2", 00:09:20.329 "aliases": [ 00:09:20.329 "378a009b-4225-11ef-aa83-81fbc7dfef58" 00:09:20.329 ], 00:09:20.329 "product_name": "Malloc disk", 00:09:20.329 "block_size": 512, 00:09:20.329 "num_blocks": 65536, 00:09:20.329 "uuid": "378a009b-4225-11ef-aa83-81fbc7dfef58", 00:09:20.329 "assigned_rate_limits": { 00:09:20.329 "rw_ios_per_sec": 0, 00:09:20.329 "rw_mbytes_per_sec": 0, 00:09:20.329 "r_mbytes_per_sec": 0, 00:09:20.329 "w_mbytes_per_sec": 0 00:09:20.329 }, 00:09:20.329 "claimed": true, 00:09:20.329 "claim_type": "exclusive_write", 00:09:20.329 "zoned": false, 00:09:20.329 "supported_io_types": { 00:09:20.329 "read": true, 00:09:20.329 "write": true, 00:09:20.329 "unmap": true, 00:09:20.329 "flush": true, 00:09:20.329 "reset": true, 00:09:20.329 "nvme_admin": false, 
00:09:20.329 "nvme_io": false, 00:09:20.329 "nvme_io_md": false, 00:09:20.329 "write_zeroes": true, 00:09:20.329 "zcopy": true, 00:09:20.329 "get_zone_info": false, 00:09:20.329 "zone_management": false, 00:09:20.329 "zone_append": false, 00:09:20.329 "compare": false, 00:09:20.329 "compare_and_write": false, 00:09:20.329 "abort": true, 00:09:20.329 "seek_hole": false, 00:09:20.329 "seek_data": false, 00:09:20.329 "copy": true, 00:09:20.329 "nvme_iov_md": false 00:09:20.329 }, 00:09:20.329 "memory_domains": [ 00:09:20.329 { 00:09:20.329 "dma_device_id": "system", 00:09:20.329 "dma_device_type": 1 00:09:20.329 }, 00:09:20.329 { 00:09:20.329 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.329 "dma_device_type": 2 00:09:20.329 } 00:09:20.329 ], 00:09:20.329 "driver_specific": {} 00:09:20.329 }' 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:20.329 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:20.588 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:20.588 "name": "BaseBdev3", 00:09:20.588 "aliases": [ 00:09:20.588 "384c1a60-4225-11ef-aa83-81fbc7dfef58" 00:09:20.588 ], 00:09:20.588 "product_name": "Malloc disk", 00:09:20.588 "block_size": 512, 00:09:20.588 "num_blocks": 65536, 00:09:20.588 "uuid": "384c1a60-4225-11ef-aa83-81fbc7dfef58", 00:09:20.588 "assigned_rate_limits": { 00:09:20.588 "rw_ios_per_sec": 0, 00:09:20.588 "rw_mbytes_per_sec": 0, 00:09:20.588 "r_mbytes_per_sec": 0, 00:09:20.588 "w_mbytes_per_sec": 0 00:09:20.588 }, 00:09:20.588 "claimed": true, 00:09:20.588 "claim_type": "exclusive_write", 00:09:20.588 "zoned": false, 00:09:20.588 "supported_io_types": { 00:09:20.588 "read": true, 00:09:20.588 "write": true, 00:09:20.588 "unmap": true, 00:09:20.588 "flush": true, 00:09:20.588 "reset": true, 00:09:20.588 "nvme_admin": false, 00:09:20.588 "nvme_io": false, 00:09:20.588 "nvme_io_md": false, 00:09:20.588 "write_zeroes": true, 00:09:20.588 "zcopy": true, 00:09:20.588 "get_zone_info": false, 00:09:20.588 "zone_management": 
false, 00:09:20.588 "zone_append": false, 00:09:20.588 "compare": false, 00:09:20.588 "compare_and_write": false, 00:09:20.588 "abort": true, 00:09:20.588 "seek_hole": false, 00:09:20.588 "seek_data": false, 00:09:20.588 "copy": true, 00:09:20.588 "nvme_iov_md": false 00:09:20.588 }, 00:09:20.588 "memory_domains": [ 00:09:20.588 { 00:09:20.588 "dma_device_id": "system", 00:09:20.588 "dma_device_type": 1 00:09:20.588 }, 00:09:20.588 { 00:09:20.588 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:20.588 "dma_device_type": 2 00:09:20.588 } 00:09:20.588 ], 00:09:20.588 "driver_specific": {} 00:09:20.588 }' 00:09:20.588 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:20.588 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:20.588 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:20.588 21:08:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:20.588 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:20.588 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:20.588 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:20.588 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:20.588 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:20.588 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:20.588 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:20.588 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:20.588 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:20.848 [2024-07-14 21:08:32.251126] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:20.848 [2024-07-14 21:08:32.251146] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:20.848 [2024-07-14 21:08:32.251174] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:20.848 21:08:32 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:20.848 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:21.107 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:21.107 "name": "Existed_Raid", 00:09:21.107 "uuid": "384c20a0-4225-11ef-aa83-81fbc7dfef58", 00:09:21.107 "strip_size_kb": 64, 00:09:21.107 "state": "offline", 00:09:21.107 "raid_level": "raid0", 00:09:21.107 "superblock": false, 00:09:21.107 "num_base_bdevs": 3, 00:09:21.107 "num_base_bdevs_discovered": 2, 00:09:21.107 "num_base_bdevs_operational": 2, 00:09:21.107 "base_bdevs_list": [ 00:09:21.107 { 00:09:21.107 "name": null, 00:09:21.107 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:21.107 "is_configured": false, 00:09:21.107 "data_offset": 0, 00:09:21.107 "data_size": 65536 00:09:21.107 }, 00:09:21.107 { 00:09:21.107 "name": "BaseBdev2", 00:09:21.107 "uuid": "378a009b-4225-11ef-aa83-81fbc7dfef58", 00:09:21.107 "is_configured": true, 00:09:21.107 "data_offset": 0, 00:09:21.107 "data_size": 65536 00:09:21.107 }, 00:09:21.107 { 00:09:21.107 "name": "BaseBdev3", 00:09:21.107 "uuid": "384c1a60-4225-11ef-aa83-81fbc7dfef58", 00:09:21.107 "is_configured": true, 00:09:21.107 "data_offset": 0, 00:09:21.107 "data_size": 65536 00:09:21.107 } 00:09:21.107 ] 00:09:21.107 }' 00:09:21.107 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:21.107 21:08:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:21.366 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:21.366 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:21.366 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:21.366 21:08:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:21.625 21:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:21.625 21:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:21.625 21:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:21.885 [2024-07-14 21:08:33.305315] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:21.885 21:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:21.885 21:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:21.885 21:08:33 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:21.885 21:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:22.142 21:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:22.142 21:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:22.142 21:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:22.399 [2024-07-14 21:08:33.879295] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:22.399 [2024-07-14 21:08:33.879326] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e755834a00 name Existed_Raid, state offline 00:09:22.399 21:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:22.399 21:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:22.399 21:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:22.399 21:08:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:22.657 21:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:22.657 21:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:22.657 21:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:22.657 21:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:22.657 21:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:22.657 21:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:22.915 BaseBdev2 00:09:22.915 21:08:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:22.915 21:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:22.915 21:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:22.915 21:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:22.915 21:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:22.915 21:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:22.915 21:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:23.481 21:08:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:23.481 [ 00:09:23.481 { 00:09:23.481 "name": "BaseBdev2", 00:09:23.481 "aliases": [ 00:09:23.481 "3b17c61b-4225-11ef-aa83-81fbc7dfef58" 00:09:23.481 ], 00:09:23.481 "product_name": "Malloc disk", 00:09:23.481 "block_size": 512, 00:09:23.481 "num_blocks": 65536, 00:09:23.481 "uuid": "3b17c61b-4225-11ef-aa83-81fbc7dfef58", 
00:09:23.481 "assigned_rate_limits": { 00:09:23.481 "rw_ios_per_sec": 0, 00:09:23.481 "rw_mbytes_per_sec": 0, 00:09:23.481 "r_mbytes_per_sec": 0, 00:09:23.481 "w_mbytes_per_sec": 0 00:09:23.481 }, 00:09:23.481 "claimed": false, 00:09:23.481 "zoned": false, 00:09:23.481 "supported_io_types": { 00:09:23.481 "read": true, 00:09:23.481 "write": true, 00:09:23.481 "unmap": true, 00:09:23.481 "flush": true, 00:09:23.481 "reset": true, 00:09:23.481 "nvme_admin": false, 00:09:23.481 "nvme_io": false, 00:09:23.481 "nvme_io_md": false, 00:09:23.481 "write_zeroes": true, 00:09:23.481 "zcopy": true, 00:09:23.481 "get_zone_info": false, 00:09:23.481 "zone_management": false, 00:09:23.481 "zone_append": false, 00:09:23.481 "compare": false, 00:09:23.481 "compare_and_write": false, 00:09:23.481 "abort": true, 00:09:23.481 "seek_hole": false, 00:09:23.481 "seek_data": false, 00:09:23.481 "copy": true, 00:09:23.481 "nvme_iov_md": false 00:09:23.481 }, 00:09:23.481 "memory_domains": [ 00:09:23.482 { 00:09:23.482 "dma_device_id": "system", 00:09:23.482 "dma_device_type": 1 00:09:23.482 }, 00:09:23.482 { 00:09:23.482 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:23.482 "dma_device_type": 2 00:09:23.482 } 00:09:23.482 ], 00:09:23.482 "driver_specific": {} 00:09:23.482 } 00:09:23.482 ] 00:09:23.482 21:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:23.482 21:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:23.482 21:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:23.482 21:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:23.739 BaseBdev3 00:09:23.996 21:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:23.996 21:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:23.996 21:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:23.996 21:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:23.996 21:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:23.996 21:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:23.996 21:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:24.255 21:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:24.255 [ 00:09:24.255 { 00:09:24.255 "name": "BaseBdev3", 00:09:24.255 "aliases": [ 00:09:24.255 "3b9758f7-4225-11ef-aa83-81fbc7dfef58" 00:09:24.255 ], 00:09:24.255 "product_name": "Malloc disk", 00:09:24.255 "block_size": 512, 00:09:24.255 "num_blocks": 65536, 00:09:24.255 "uuid": "3b9758f7-4225-11ef-aa83-81fbc7dfef58", 00:09:24.255 "assigned_rate_limits": { 00:09:24.255 "rw_ios_per_sec": 0, 00:09:24.255 "rw_mbytes_per_sec": 0, 00:09:24.255 "r_mbytes_per_sec": 0, 00:09:24.255 "w_mbytes_per_sec": 0 00:09:24.255 }, 00:09:24.255 "claimed": false, 00:09:24.255 "zoned": false, 00:09:24.255 "supported_io_types": { 00:09:24.255 "read": true, 00:09:24.255 "write": 
true, 00:09:24.255 "unmap": true, 00:09:24.255 "flush": true, 00:09:24.255 "reset": true, 00:09:24.255 "nvme_admin": false, 00:09:24.255 "nvme_io": false, 00:09:24.255 "nvme_io_md": false, 00:09:24.255 "write_zeroes": true, 00:09:24.255 "zcopy": true, 00:09:24.255 "get_zone_info": false, 00:09:24.255 "zone_management": false, 00:09:24.255 "zone_append": false, 00:09:24.255 "compare": false, 00:09:24.255 "compare_and_write": false, 00:09:24.255 "abort": true, 00:09:24.255 "seek_hole": false, 00:09:24.255 "seek_data": false, 00:09:24.255 "copy": true, 00:09:24.255 "nvme_iov_md": false 00:09:24.255 }, 00:09:24.255 "memory_domains": [ 00:09:24.255 { 00:09:24.255 "dma_device_id": "system", 00:09:24.255 "dma_device_type": 1 00:09:24.255 }, 00:09:24.255 { 00:09:24.255 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:24.255 "dma_device_type": 2 00:09:24.255 } 00:09:24.255 ], 00:09:24.255 "driver_specific": {} 00:09:24.255 } 00:09:24.255 ] 00:09:24.513 21:08:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:24.513 21:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:24.513 21:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:24.513 21:08:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:24.513 [2024-07-14 21:08:36.045303] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:24.513 [2024-07-14 21:08:36.045365] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:24.513 [2024-07-14 21:08:36.045374] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:24.513 [2024-07-14 21:08:36.046020] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:24.513 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:24.771 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:24.771 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:24.771 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:24.771 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:24.771 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:24.771 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:24.771 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:24.771 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:24.772 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:24.772 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:24.772 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.030 21:08:36 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:25.030 "name": "Existed_Raid", 00:09:25.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.030 "strip_size_kb": 64, 00:09:25.030 "state": "configuring", 00:09:25.030 "raid_level": "raid0", 00:09:25.030 "superblock": false, 00:09:25.030 "num_base_bdevs": 3, 00:09:25.030 "num_base_bdevs_discovered": 2, 00:09:25.030 "num_base_bdevs_operational": 3, 00:09:25.030 "base_bdevs_list": [ 00:09:25.030 { 00:09:25.030 "name": "BaseBdev1", 00:09:25.030 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.030 "is_configured": false, 00:09:25.030 "data_offset": 0, 00:09:25.030 "data_size": 0 00:09:25.030 }, 00:09:25.030 { 00:09:25.030 "name": "BaseBdev2", 00:09:25.030 "uuid": "3b17c61b-4225-11ef-aa83-81fbc7dfef58", 00:09:25.030 "is_configured": true, 00:09:25.030 "data_offset": 0, 00:09:25.030 "data_size": 65536 00:09:25.030 }, 00:09:25.030 { 00:09:25.030 "name": "BaseBdev3", 00:09:25.030 "uuid": "3b9758f7-4225-11ef-aa83-81fbc7dfef58", 00:09:25.030 "is_configured": true, 00:09:25.030 "data_offset": 0, 00:09:25.030 "data_size": 65536 00:09:25.030 } 00:09:25.030 ] 00:09:25.030 }' 00:09:25.030 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:25.030 21:08:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:25.288 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:25.546 [2024-07-14 21:08:36.969390] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:25.546 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:25.546 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:25.546 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:25.546 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:25.546 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:25.546 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:25.546 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:25.546 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:25.546 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:25.546 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:25.546 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:25.546 21:08:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:25.805 21:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:25.805 "name": "Existed_Raid", 00:09:25.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.805 "strip_size_kb": 64, 00:09:25.805 "state": "configuring", 00:09:25.805 "raid_level": "raid0", 00:09:25.805 "superblock": false, 00:09:25.805 "num_base_bdevs": 3, 00:09:25.805 "num_base_bdevs_discovered": 1, 
00:09:25.805 "num_base_bdevs_operational": 3, 00:09:25.805 "base_bdevs_list": [ 00:09:25.805 { 00:09:25.805 "name": "BaseBdev1", 00:09:25.805 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:25.805 "is_configured": false, 00:09:25.805 "data_offset": 0, 00:09:25.805 "data_size": 0 00:09:25.805 }, 00:09:25.805 { 00:09:25.805 "name": null, 00:09:25.805 "uuid": "3b17c61b-4225-11ef-aa83-81fbc7dfef58", 00:09:25.805 "is_configured": false, 00:09:25.805 "data_offset": 0, 00:09:25.805 "data_size": 65536 00:09:25.805 }, 00:09:25.805 { 00:09:25.805 "name": "BaseBdev3", 00:09:25.805 "uuid": "3b9758f7-4225-11ef-aa83-81fbc7dfef58", 00:09:25.805 "is_configured": true, 00:09:25.805 "data_offset": 0, 00:09:25.805 "data_size": 65536 00:09:25.805 } 00:09:25.805 ] 00:09:25.805 }' 00:09:25.805 21:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:25.805 21:08:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:26.064 21:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:26.064 21:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:26.322 21:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:26.322 21:08:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:26.580 [2024-07-14 21:08:38.049639] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:26.580 BaseBdev1 00:09:26.580 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:26.580 21:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:26.580 21:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:26.580 21:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:26.580 21:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:26.580 21:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:26.580 21:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:26.838 21:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:27.097 [ 00:09:27.097 { 00:09:27.098 "name": "BaseBdev1", 00:09:27.098 "aliases": [ 00:09:27.098 "3d3e5d45-4225-11ef-aa83-81fbc7dfef58" 00:09:27.098 ], 00:09:27.098 "product_name": "Malloc disk", 00:09:27.098 "block_size": 512, 00:09:27.098 "num_blocks": 65536, 00:09:27.098 "uuid": "3d3e5d45-4225-11ef-aa83-81fbc7dfef58", 00:09:27.098 "assigned_rate_limits": { 00:09:27.098 "rw_ios_per_sec": 0, 00:09:27.098 "rw_mbytes_per_sec": 0, 00:09:27.098 "r_mbytes_per_sec": 0, 00:09:27.098 "w_mbytes_per_sec": 0 00:09:27.098 }, 00:09:27.098 "claimed": true, 00:09:27.098 "claim_type": "exclusive_write", 00:09:27.098 "zoned": false, 00:09:27.098 "supported_io_types": { 00:09:27.098 "read": true, 00:09:27.098 "write": true, 00:09:27.098 "unmap": 
true, 00:09:27.098 "flush": true, 00:09:27.098 "reset": true, 00:09:27.098 "nvme_admin": false, 00:09:27.098 "nvme_io": false, 00:09:27.098 "nvme_io_md": false, 00:09:27.098 "write_zeroes": true, 00:09:27.098 "zcopy": true, 00:09:27.098 "get_zone_info": false, 00:09:27.098 "zone_management": false, 00:09:27.098 "zone_append": false, 00:09:27.098 "compare": false, 00:09:27.098 "compare_and_write": false, 00:09:27.098 "abort": true, 00:09:27.098 "seek_hole": false, 00:09:27.098 "seek_data": false, 00:09:27.098 "copy": true, 00:09:27.098 "nvme_iov_md": false 00:09:27.098 }, 00:09:27.098 "memory_domains": [ 00:09:27.098 { 00:09:27.098 "dma_device_id": "system", 00:09:27.098 "dma_device_type": 1 00:09:27.098 }, 00:09:27.098 { 00:09:27.098 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:27.098 "dma_device_type": 2 00:09:27.098 } 00:09:27.098 ], 00:09:27.098 "driver_specific": {} 00:09:27.098 } 00:09:27.098 ] 00:09:27.098 21:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:27.098 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:27.098 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:27.098 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:27.098 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:27.098 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:27.098 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:27.098 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:27.098 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:27.098 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:27.098 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:27.098 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:27.098 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.356 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:27.356 "name": "Existed_Raid", 00:09:27.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:27.356 "strip_size_kb": 64, 00:09:27.356 "state": "configuring", 00:09:27.356 "raid_level": "raid0", 00:09:27.356 "superblock": false, 00:09:27.356 "num_base_bdevs": 3, 00:09:27.356 "num_base_bdevs_discovered": 2, 00:09:27.356 "num_base_bdevs_operational": 3, 00:09:27.356 "base_bdevs_list": [ 00:09:27.356 { 00:09:27.356 "name": "BaseBdev1", 00:09:27.356 "uuid": "3d3e5d45-4225-11ef-aa83-81fbc7dfef58", 00:09:27.356 "is_configured": true, 00:09:27.356 "data_offset": 0, 00:09:27.356 "data_size": 65536 00:09:27.356 }, 00:09:27.356 { 00:09:27.356 "name": null, 00:09:27.356 "uuid": "3b17c61b-4225-11ef-aa83-81fbc7dfef58", 00:09:27.356 "is_configured": false, 00:09:27.356 "data_offset": 0, 00:09:27.356 "data_size": 65536 00:09:27.356 }, 00:09:27.356 { 00:09:27.356 "name": "BaseBdev3", 00:09:27.356 "uuid": "3b9758f7-4225-11ef-aa83-81fbc7dfef58", 
00:09:27.356 "is_configured": true, 00:09:27.356 "data_offset": 0, 00:09:27.356 "data_size": 65536 00:09:27.356 } 00:09:27.356 ] 00:09:27.356 }' 00:09:27.356 21:08:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:27.356 21:08:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:27.922 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:27.922 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:27.922 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:27.922 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:28.179 [2024-07-14 21:08:39.653749] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:28.180 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:28.180 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:28.180 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:28.180 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:28.180 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:28.180 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:28.180 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:28.180 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:28.180 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:28.180 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:28.180 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.180 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:28.456 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:28.456 "name": "Existed_Raid", 00:09:28.456 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:28.456 "strip_size_kb": 64, 00:09:28.456 "state": "configuring", 00:09:28.456 "raid_level": "raid0", 00:09:28.456 "superblock": false, 00:09:28.456 "num_base_bdevs": 3, 00:09:28.456 "num_base_bdevs_discovered": 1, 00:09:28.456 "num_base_bdevs_operational": 3, 00:09:28.456 "base_bdevs_list": [ 00:09:28.456 { 00:09:28.456 "name": "BaseBdev1", 00:09:28.456 "uuid": "3d3e5d45-4225-11ef-aa83-81fbc7dfef58", 00:09:28.456 "is_configured": true, 00:09:28.456 "data_offset": 0, 00:09:28.456 "data_size": 65536 00:09:28.456 }, 00:09:28.456 { 00:09:28.456 "name": null, 00:09:28.456 "uuid": "3b17c61b-4225-11ef-aa83-81fbc7dfef58", 00:09:28.456 "is_configured": false, 00:09:28.456 "data_offset": 0, 00:09:28.456 "data_size": 65536 00:09:28.456 }, 00:09:28.456 { 00:09:28.456 "name": null, 00:09:28.456 "uuid": 
"3b9758f7-4225-11ef-aa83-81fbc7dfef58", 00:09:28.456 "is_configured": false, 00:09:28.456 "data_offset": 0, 00:09:28.456 "data_size": 65536 00:09:28.456 } 00:09:28.456 ] 00:09:28.456 }' 00:09:28.456 21:08:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:28.456 21:08:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:28.749 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:28.749 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:29.314 [2024-07-14 21:08:40.801962] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:29.314 21:08:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:29.573 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:29.573 "name": "Existed_Raid", 00:09:29.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:29.573 "strip_size_kb": 64, 00:09:29.573 "state": "configuring", 00:09:29.573 "raid_level": "raid0", 00:09:29.573 "superblock": false, 00:09:29.573 "num_base_bdevs": 3, 00:09:29.573 "num_base_bdevs_discovered": 2, 00:09:29.573 "num_base_bdevs_operational": 3, 00:09:29.573 "base_bdevs_list": [ 00:09:29.573 { 00:09:29.573 "name": "BaseBdev1", 00:09:29.573 "uuid": "3d3e5d45-4225-11ef-aa83-81fbc7dfef58", 00:09:29.573 "is_configured": true, 00:09:29.573 "data_offset": 0, 00:09:29.573 "data_size": 65536 00:09:29.573 }, 00:09:29.573 { 00:09:29.573 "name": null, 00:09:29.573 "uuid": "3b17c61b-4225-11ef-aa83-81fbc7dfef58", 00:09:29.573 "is_configured": false, 00:09:29.573 "data_offset": 0, 00:09:29.573 "data_size": 65536 
00:09:29.573 }, 00:09:29.573 { 00:09:29.573 "name": "BaseBdev3", 00:09:29.573 "uuid": "3b9758f7-4225-11ef-aa83-81fbc7dfef58", 00:09:29.573 "is_configured": true, 00:09:29.573 "data_offset": 0, 00:09:29.573 "data_size": 65536 00:09:29.573 } 00:09:29.573 ] 00:09:29.573 }' 00:09:29.573 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:29.573 21:08:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:30.157 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.157 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:30.414 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:30.414 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:30.414 [2024-07-14 21:08:41.938216] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:30.414 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:30.414 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:30.415 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:30.415 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:30.415 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:30.415 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:30.415 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:30.415 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:30.415 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:30.415 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:30.673 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:30.673 21:08:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:30.931 21:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:30.931 "name": "Existed_Raid", 00:09:30.931 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:30.931 "strip_size_kb": 64, 00:09:30.931 "state": "configuring", 00:09:30.931 "raid_level": "raid0", 00:09:30.931 "superblock": false, 00:09:30.931 "num_base_bdevs": 3, 00:09:30.931 "num_base_bdevs_discovered": 1, 00:09:30.931 "num_base_bdevs_operational": 3, 00:09:30.931 "base_bdevs_list": [ 00:09:30.931 { 00:09:30.931 "name": null, 00:09:30.931 "uuid": "3d3e5d45-4225-11ef-aa83-81fbc7dfef58", 00:09:30.931 "is_configured": false, 00:09:30.931 "data_offset": 0, 00:09:30.931 "data_size": 65536 00:09:30.931 }, 00:09:30.931 { 00:09:30.931 "name": null, 00:09:30.931 "uuid": "3b17c61b-4225-11ef-aa83-81fbc7dfef58", 00:09:30.931 "is_configured": false, 00:09:30.931 "data_offset": 
0, 00:09:30.931 "data_size": 65536 00:09:30.931 }, 00:09:30.932 { 00:09:30.932 "name": "BaseBdev3", 00:09:30.932 "uuid": "3b9758f7-4225-11ef-aa83-81fbc7dfef58", 00:09:30.932 "is_configured": true, 00:09:30.932 "data_offset": 0, 00:09:30.932 "data_size": 65536 00:09:30.932 } 00:09:30.932 ] 00:09:30.932 }' 00:09:30.932 21:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:30.932 21:08:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:31.191 21:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.191 21:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:31.450 21:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:31.450 21:08:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:31.709 [2024-07-14 21:08:43.052666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:31.709 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:31.709 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:31.709 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:31.709 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:31.709 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:31.709 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:31.709 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:31.709 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:31.709 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:31.709 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:31.709 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:31.709 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:31.967 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:31.967 "name": "Existed_Raid", 00:09:31.967 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:31.967 "strip_size_kb": 64, 00:09:31.967 "state": "configuring", 00:09:31.967 "raid_level": "raid0", 00:09:31.967 "superblock": false, 00:09:31.967 "num_base_bdevs": 3, 00:09:31.967 "num_base_bdevs_discovered": 2, 00:09:31.967 "num_base_bdevs_operational": 3, 00:09:31.967 "base_bdevs_list": [ 00:09:31.967 { 00:09:31.967 "name": null, 00:09:31.967 "uuid": "3d3e5d45-4225-11ef-aa83-81fbc7dfef58", 00:09:31.967 "is_configured": false, 00:09:31.967 "data_offset": 0, 00:09:31.967 "data_size": 65536 00:09:31.967 }, 00:09:31.967 { 00:09:31.967 "name": "BaseBdev2", 00:09:31.967 "uuid": 
"3b17c61b-4225-11ef-aa83-81fbc7dfef58", 00:09:31.967 "is_configured": true, 00:09:31.967 "data_offset": 0, 00:09:31.967 "data_size": 65536 00:09:31.967 }, 00:09:31.967 { 00:09:31.967 "name": "BaseBdev3", 00:09:31.967 "uuid": "3b9758f7-4225-11ef-aa83-81fbc7dfef58", 00:09:31.967 "is_configured": true, 00:09:31.967 "data_offset": 0, 00:09:31.967 "data_size": 65536 00:09:31.967 } 00:09:31.967 ] 00:09:31.967 }' 00:09:31.967 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:31.967 21:08:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:32.225 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.225 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:32.483 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:32.483 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:32.483 21:08:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:32.741 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 3d3e5d45-4225-11ef-aa83-81fbc7dfef58 00:09:32.998 [2024-07-14 21:08:44.348831] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:32.998 [2024-07-14 21:08:44.348856] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2e755834a00 00:09:32.998 [2024-07-14 21:08:44.348860] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:09:32.998 [2024-07-14 21:08:44.348882] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2e755897e20 00:09:32.998 [2024-07-14 21:08:44.348950] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2e755834a00 00:09:32.998 [2024-07-14 21:08:44.348954] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x2e755834a00 00:09:32.998 [2024-07-14 21:08:44.348985] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:32.998 NewBaseBdev 00:09:32.998 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:32.998 21:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:09:32.998 21:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:32.998 21:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:09:32.998 21:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:32.998 21:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:32.998 21:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:33.256 21:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
NewBaseBdev -t 2000 00:09:33.514 [ 00:09:33.514 { 00:09:33.514 "name": "NewBaseBdev", 00:09:33.514 "aliases": [ 00:09:33.514 "3d3e5d45-4225-11ef-aa83-81fbc7dfef58" 00:09:33.514 ], 00:09:33.514 "product_name": "Malloc disk", 00:09:33.514 "block_size": 512, 00:09:33.514 "num_blocks": 65536, 00:09:33.514 "uuid": "3d3e5d45-4225-11ef-aa83-81fbc7dfef58", 00:09:33.514 "assigned_rate_limits": { 00:09:33.514 "rw_ios_per_sec": 0, 00:09:33.514 "rw_mbytes_per_sec": 0, 00:09:33.514 "r_mbytes_per_sec": 0, 00:09:33.514 "w_mbytes_per_sec": 0 00:09:33.514 }, 00:09:33.514 "claimed": true, 00:09:33.514 "claim_type": "exclusive_write", 00:09:33.514 "zoned": false, 00:09:33.514 "supported_io_types": { 00:09:33.514 "read": true, 00:09:33.514 "write": true, 00:09:33.514 "unmap": true, 00:09:33.514 "flush": true, 00:09:33.514 "reset": true, 00:09:33.514 "nvme_admin": false, 00:09:33.514 "nvme_io": false, 00:09:33.514 "nvme_io_md": false, 00:09:33.514 "write_zeroes": true, 00:09:33.514 "zcopy": true, 00:09:33.514 "get_zone_info": false, 00:09:33.514 "zone_management": false, 00:09:33.514 "zone_append": false, 00:09:33.514 "compare": false, 00:09:33.514 "compare_and_write": false, 00:09:33.514 "abort": true, 00:09:33.514 "seek_hole": false, 00:09:33.514 "seek_data": false, 00:09:33.514 "copy": true, 00:09:33.514 "nvme_iov_md": false 00:09:33.514 }, 00:09:33.514 "memory_domains": [ 00:09:33.514 { 00:09:33.514 "dma_device_id": "system", 00:09:33.514 "dma_device_type": 1 00:09:33.514 }, 00:09:33.514 { 00:09:33.514 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:33.514 "dma_device_type": 2 00:09:33.514 } 00:09:33.514 ], 00:09:33.514 "driver_specific": {} 00:09:33.514 } 00:09:33.514 ] 00:09:33.514 21:08:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:09:33.514 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:33.514 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:33.514 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:33.514 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:33.515 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:33.515 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:33.515 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:33.515 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:33.515 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:33.515 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:33.515 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:33.515 21:08:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:33.773 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:33.773 "name": "Existed_Raid", 00:09:33.773 "uuid": "40ff9195-4225-11ef-aa83-81fbc7dfef58", 00:09:33.773 "strip_size_kb": 64, 00:09:33.773 "state": "online", 00:09:33.773 "raid_level": "raid0", 
00:09:33.773 "superblock": false, 00:09:33.773 "num_base_bdevs": 3, 00:09:33.773 "num_base_bdevs_discovered": 3, 00:09:33.773 "num_base_bdevs_operational": 3, 00:09:33.773 "base_bdevs_list": [ 00:09:33.773 { 00:09:33.773 "name": "NewBaseBdev", 00:09:33.773 "uuid": "3d3e5d45-4225-11ef-aa83-81fbc7dfef58", 00:09:33.773 "is_configured": true, 00:09:33.773 "data_offset": 0, 00:09:33.773 "data_size": 65536 00:09:33.773 }, 00:09:33.773 { 00:09:33.773 "name": "BaseBdev2", 00:09:33.773 "uuid": "3b17c61b-4225-11ef-aa83-81fbc7dfef58", 00:09:33.773 "is_configured": true, 00:09:33.773 "data_offset": 0, 00:09:33.773 "data_size": 65536 00:09:33.773 }, 00:09:33.773 { 00:09:33.773 "name": "BaseBdev3", 00:09:33.773 "uuid": "3b9758f7-4225-11ef-aa83-81fbc7dfef58", 00:09:33.773 "is_configured": true, 00:09:33.773 "data_offset": 0, 00:09:33.773 "data_size": 65536 00:09:33.773 } 00:09:33.773 ] 00:09:33.773 }' 00:09:33.773 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:33.773 21:08:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:34.032 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:34.032 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:34.032 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:34.032 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:34.032 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:34.032 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:09:34.032 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:34.032 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:34.291 [2024-07-14 21:08:45.640783] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:34.291 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:34.291 "name": "Existed_Raid", 00:09:34.291 "aliases": [ 00:09:34.291 "40ff9195-4225-11ef-aa83-81fbc7dfef58" 00:09:34.291 ], 00:09:34.291 "product_name": "Raid Volume", 00:09:34.291 "block_size": 512, 00:09:34.291 "num_blocks": 196608, 00:09:34.291 "uuid": "40ff9195-4225-11ef-aa83-81fbc7dfef58", 00:09:34.291 "assigned_rate_limits": { 00:09:34.291 "rw_ios_per_sec": 0, 00:09:34.291 "rw_mbytes_per_sec": 0, 00:09:34.291 "r_mbytes_per_sec": 0, 00:09:34.291 "w_mbytes_per_sec": 0 00:09:34.291 }, 00:09:34.291 "claimed": false, 00:09:34.291 "zoned": false, 00:09:34.291 "supported_io_types": { 00:09:34.291 "read": true, 00:09:34.291 "write": true, 00:09:34.291 "unmap": true, 00:09:34.291 "flush": true, 00:09:34.291 "reset": true, 00:09:34.291 "nvme_admin": false, 00:09:34.291 "nvme_io": false, 00:09:34.291 "nvme_io_md": false, 00:09:34.291 "write_zeroes": true, 00:09:34.291 "zcopy": false, 00:09:34.291 "get_zone_info": false, 00:09:34.291 "zone_management": false, 00:09:34.291 "zone_append": false, 00:09:34.291 "compare": false, 00:09:34.291 "compare_and_write": false, 00:09:34.291 "abort": false, 00:09:34.291 "seek_hole": false, 00:09:34.291 "seek_data": false, 00:09:34.291 "copy": false, 00:09:34.291 "nvme_iov_md": false 00:09:34.291 }, 00:09:34.291 
"memory_domains": [ 00:09:34.291 { 00:09:34.291 "dma_device_id": "system", 00:09:34.291 "dma_device_type": 1 00:09:34.291 }, 00:09:34.291 { 00:09:34.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.291 "dma_device_type": 2 00:09:34.291 }, 00:09:34.291 { 00:09:34.291 "dma_device_id": "system", 00:09:34.291 "dma_device_type": 1 00:09:34.291 }, 00:09:34.291 { 00:09:34.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.291 "dma_device_type": 2 00:09:34.291 }, 00:09:34.291 { 00:09:34.291 "dma_device_id": "system", 00:09:34.291 "dma_device_type": 1 00:09:34.291 }, 00:09:34.291 { 00:09:34.291 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.291 "dma_device_type": 2 00:09:34.291 } 00:09:34.291 ], 00:09:34.291 "driver_specific": { 00:09:34.291 "raid": { 00:09:34.291 "uuid": "40ff9195-4225-11ef-aa83-81fbc7dfef58", 00:09:34.291 "strip_size_kb": 64, 00:09:34.291 "state": "online", 00:09:34.291 "raid_level": "raid0", 00:09:34.291 "superblock": false, 00:09:34.291 "num_base_bdevs": 3, 00:09:34.291 "num_base_bdevs_discovered": 3, 00:09:34.291 "num_base_bdevs_operational": 3, 00:09:34.291 "base_bdevs_list": [ 00:09:34.291 { 00:09:34.291 "name": "NewBaseBdev", 00:09:34.291 "uuid": "3d3e5d45-4225-11ef-aa83-81fbc7dfef58", 00:09:34.291 "is_configured": true, 00:09:34.291 "data_offset": 0, 00:09:34.291 "data_size": 65536 00:09:34.291 }, 00:09:34.291 { 00:09:34.291 "name": "BaseBdev2", 00:09:34.291 "uuid": "3b17c61b-4225-11ef-aa83-81fbc7dfef58", 00:09:34.291 "is_configured": true, 00:09:34.291 "data_offset": 0, 00:09:34.291 "data_size": 65536 00:09:34.291 }, 00:09:34.291 { 00:09:34.291 "name": "BaseBdev3", 00:09:34.291 "uuid": "3b9758f7-4225-11ef-aa83-81fbc7dfef58", 00:09:34.291 "is_configured": true, 00:09:34.291 "data_offset": 0, 00:09:34.291 "data_size": 65536 00:09:34.291 } 00:09:34.291 ] 00:09:34.291 } 00:09:34.291 } 00:09:34.291 }' 00:09:34.291 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:34.291 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:34.291 BaseBdev2 00:09:34.291 BaseBdev3' 00:09:34.291 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:34.291 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:34.291 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:34.551 "name": "NewBaseBdev", 00:09:34.551 "aliases": [ 00:09:34.551 "3d3e5d45-4225-11ef-aa83-81fbc7dfef58" 00:09:34.551 ], 00:09:34.551 "product_name": "Malloc disk", 00:09:34.551 "block_size": 512, 00:09:34.551 "num_blocks": 65536, 00:09:34.551 "uuid": "3d3e5d45-4225-11ef-aa83-81fbc7dfef58", 00:09:34.551 "assigned_rate_limits": { 00:09:34.551 "rw_ios_per_sec": 0, 00:09:34.551 "rw_mbytes_per_sec": 0, 00:09:34.551 "r_mbytes_per_sec": 0, 00:09:34.551 "w_mbytes_per_sec": 0 00:09:34.551 }, 00:09:34.551 "claimed": true, 00:09:34.551 "claim_type": "exclusive_write", 00:09:34.551 "zoned": false, 00:09:34.551 "supported_io_types": { 00:09:34.551 "read": true, 00:09:34.551 "write": true, 00:09:34.551 "unmap": true, 00:09:34.551 "flush": true, 00:09:34.551 "reset": true, 00:09:34.551 "nvme_admin": false, 00:09:34.551 "nvme_io": false, 
00:09:34.551 "nvme_io_md": false, 00:09:34.551 "write_zeroes": true, 00:09:34.551 "zcopy": true, 00:09:34.551 "get_zone_info": false, 00:09:34.551 "zone_management": false, 00:09:34.551 "zone_append": false, 00:09:34.551 "compare": false, 00:09:34.551 "compare_and_write": false, 00:09:34.551 "abort": true, 00:09:34.551 "seek_hole": false, 00:09:34.551 "seek_data": false, 00:09:34.551 "copy": true, 00:09:34.551 "nvme_iov_md": false 00:09:34.551 }, 00:09:34.551 "memory_domains": [ 00:09:34.551 { 00:09:34.551 "dma_device_id": "system", 00:09:34.551 "dma_device_type": 1 00:09:34.551 }, 00:09:34.551 { 00:09:34.551 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.551 "dma_device_type": 2 00:09:34.551 } 00:09:34.551 ], 00:09:34.551 "driver_specific": {} 00:09:34.551 }' 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:34.551 21:08:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:34.810 "name": "BaseBdev2", 00:09:34.810 "aliases": [ 00:09:34.810 "3b17c61b-4225-11ef-aa83-81fbc7dfef58" 00:09:34.810 ], 00:09:34.810 "product_name": "Malloc disk", 00:09:34.810 "block_size": 512, 00:09:34.810 "num_blocks": 65536, 00:09:34.810 "uuid": "3b17c61b-4225-11ef-aa83-81fbc7dfef58", 00:09:34.810 "assigned_rate_limits": { 00:09:34.810 "rw_ios_per_sec": 0, 00:09:34.810 "rw_mbytes_per_sec": 0, 00:09:34.810 "r_mbytes_per_sec": 0, 00:09:34.810 "w_mbytes_per_sec": 0 00:09:34.810 }, 00:09:34.810 "claimed": true, 00:09:34.810 "claim_type": "exclusive_write", 00:09:34.810 "zoned": false, 00:09:34.810 "supported_io_types": { 00:09:34.810 "read": true, 00:09:34.810 "write": true, 00:09:34.810 "unmap": true, 00:09:34.810 "flush": true, 00:09:34.810 "reset": true, 00:09:34.810 "nvme_admin": false, 00:09:34.810 "nvme_io": false, 00:09:34.810 "nvme_io_md": false, 00:09:34.810 "write_zeroes": true, 00:09:34.810 "zcopy": true, 00:09:34.810 "get_zone_info": false, 00:09:34.810 "zone_management": false, 00:09:34.810 "zone_append": 
false, 00:09:34.810 "compare": false, 00:09:34.810 "compare_and_write": false, 00:09:34.810 "abort": true, 00:09:34.810 "seek_hole": false, 00:09:34.810 "seek_data": false, 00:09:34.810 "copy": true, 00:09:34.810 "nvme_iov_md": false 00:09:34.810 }, 00:09:34.810 "memory_domains": [ 00:09:34.810 { 00:09:34.810 "dma_device_id": "system", 00:09:34.810 "dma_device_type": 1 00:09:34.810 }, 00:09:34.810 { 00:09:34.810 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:34.810 "dma_device_type": 2 00:09:34.810 } 00:09:34.810 ], 00:09:34.810 "driver_specific": {} 00:09:34.810 }' 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:34.810 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:35.069 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:35.069 "name": "BaseBdev3", 00:09:35.069 "aliases": [ 00:09:35.069 "3b9758f7-4225-11ef-aa83-81fbc7dfef58" 00:09:35.069 ], 00:09:35.069 "product_name": "Malloc disk", 00:09:35.069 "block_size": 512, 00:09:35.069 "num_blocks": 65536, 00:09:35.069 "uuid": "3b9758f7-4225-11ef-aa83-81fbc7dfef58", 00:09:35.069 "assigned_rate_limits": { 00:09:35.069 "rw_ios_per_sec": 0, 00:09:35.069 "rw_mbytes_per_sec": 0, 00:09:35.069 "r_mbytes_per_sec": 0, 00:09:35.069 "w_mbytes_per_sec": 0 00:09:35.069 }, 00:09:35.069 "claimed": true, 00:09:35.069 "claim_type": "exclusive_write", 00:09:35.069 "zoned": false, 00:09:35.069 "supported_io_types": { 00:09:35.069 "read": true, 00:09:35.069 "write": true, 00:09:35.069 "unmap": true, 00:09:35.069 "flush": true, 00:09:35.069 "reset": true, 00:09:35.069 "nvme_admin": false, 00:09:35.069 "nvme_io": false, 00:09:35.069 "nvme_io_md": false, 00:09:35.069 "write_zeroes": true, 00:09:35.069 "zcopy": true, 00:09:35.069 "get_zone_info": false, 00:09:35.069 "zone_management": false, 00:09:35.069 "zone_append": false, 00:09:35.069 "compare": false, 00:09:35.069 "compare_and_write": false, 00:09:35.069 "abort": true, 00:09:35.069 "seek_hole": false, 00:09:35.069 "seek_data": false, 00:09:35.069 "copy": true, 
00:09:35.069 "nvme_iov_md": false 00:09:35.069 }, 00:09:35.069 "memory_domains": [ 00:09:35.069 { 00:09:35.069 "dma_device_id": "system", 00:09:35.069 "dma_device_type": 1 00:09:35.069 }, 00:09:35.069 { 00:09:35.069 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:35.069 "dma_device_type": 2 00:09:35.069 } 00:09:35.069 ], 00:09:35.069 "driver_specific": {} 00:09:35.069 }' 00:09:35.069 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:35.069 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:35.069 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:35.069 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:35.328 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:35.328 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:35.328 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:35.328 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:35.328 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:35.328 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:35.328 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:35.328 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:35.328 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:35.328 [2024-07-14 21:08:46.860773] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:35.328 [2024-07-14 21:08:46.860793] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:35.328 [2024-07-14 21:08:46.860831] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:35.328 [2024-07-14 21:08:46.860844] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:35.328 [2024-07-14 21:08:46.860849] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2e755834a00 name Existed_Raid, state offline 00:09:35.586 21:08:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 51923 00:09:35.586 21:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 51923 ']' 00:09:35.586 21:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 51923 00:09:35.586 21:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:09:35.586 21:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:35.586 21:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 51923 00:09:35.586 21:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:09:35.586 21:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:35.586 killing process with pid 51923 00:09:35.586 21:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:35.586 21:08:46 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 51923' 00:09:35.586 21:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 51923 00:09:35.586 [2024-07-14 21:08:46.887591] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:35.586 21:08:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 51923 00:09:35.586 [2024-07-14 21:08:46.905888] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:35.586 21:08:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:09:35.586 00:09:35.586 real 0m23.605s 00:09:35.586 user 0m43.119s 00:09:35.586 sys 0m3.299s 00:09:35.586 ************************************ 00:09:35.587 END TEST raid_state_function_test 00:09:35.587 ************************************ 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:09:35.587 21:08:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:35.587 21:08:47 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 3 true 00:09:35.587 21:08:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:09:35.587 21:08:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:35.587 21:08:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:35.587 ************************************ 00:09:35.587 START TEST raid_state_function_test_sb 00:09:35.587 ************************************ 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 3 true 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:09:35.587 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # 
base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=52648 00:09:35.845 Process raid pid: 52648 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 52648' 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 52648 /var/tmp/spdk-raid.sock 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 52648 ']' 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:35.845 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:35.845 21:08:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:35.845 [2024-07-14 21:08:47.145064] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:35.845 [2024-07-14 21:08:47.145360] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:09:36.412 EAL: TSC is not safe to use in SMP mode 00:09:36.412 EAL: TSC is not invariant 00:09:36.412 [2024-07-14 21:08:47.686335] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:36.412 [2024-07-14 21:08:47.775410] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
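The trace above has just launched a fresh bdev_svc app (pid 52648) listening on /var/tmp/spdk-raid.sock, and every assertion that follows in this test reduces to the same pattern already exercised by raid_state_function_test: dump the raid bdev state over the Unix-socket RPC and compare a single field against an expectation. A minimal standalone sketch of that pattern, assembled only from the rpc.py invocations and jq filters that appear verbatim in this log (the shell variables and the exit-on-mismatch framing here are illustrative additions, not part of the suite):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Fetch every raid bdev and keep the one under test, exactly as the
    # "bdev_raid_get_bdevs all | jq -r '.[] | select(.name == ...)'" steps do.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "Existed_Raid")')

    # Compare one reported field against the expectation, mirroring the
    # [[ false == \f\a\l\s\e ]] style checks in the trace.
    state=$(jq -r '.state' <<< "$info")
    [[ "$state" == "configuring" ]] || exit 1

The -s flag on the bdev_raid_create calls that follow is what distinguishes this _sb variant from the earlier run: the subsequent state dumps report "superblock": true (with data_offset 2048 and data_size 63488) rather than "superblock": false.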
00:09:36.412 [2024-07-14 21:08:47.777745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:36.412 [2024-07-14 21:08:47.778601] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.413 [2024-07-14 21:08:47.778617] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:09:36.672 21:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:36.672 21:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:09:36.672 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:36.930 [2024-07-14 21:08:48.431229] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:36.930 [2024-07-14 21:08:48.431287] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:36.930 [2024-07-14 21:08:48.431292] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:36.930 [2024-07-14 21:08:48.431316] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:36.930 [2024-07-14 21:08:48.431320] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:36.930 [2024-07-14 21:08:48.431326] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:36.930 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:36.930 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:36.930 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:36.930 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:36.930 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:36.930 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:36.930 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:36.930 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:36.930 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:36.930 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:36.930 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:36.930 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:37.188 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:37.188 "name": "Existed_Raid", 00:09:37.188 "uuid": "436e7cae-4225-11ef-aa83-81fbc7dfef58", 00:09:37.188 "strip_size_kb": 64, 00:09:37.188 "state": "configuring", 00:09:37.188 "raid_level": "raid0", 00:09:37.188 "superblock": true, 00:09:37.188 "num_base_bdevs": 3, 00:09:37.188 "num_base_bdevs_discovered": 0, 00:09:37.188 
"num_base_bdevs_operational": 3, 00:09:37.188 "base_bdevs_list": [ 00:09:37.188 { 00:09:37.188 "name": "BaseBdev1", 00:09:37.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.188 "is_configured": false, 00:09:37.188 "data_offset": 0, 00:09:37.188 "data_size": 0 00:09:37.188 }, 00:09:37.188 { 00:09:37.188 "name": "BaseBdev2", 00:09:37.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.188 "is_configured": false, 00:09:37.188 "data_offset": 0, 00:09:37.188 "data_size": 0 00:09:37.188 }, 00:09:37.188 { 00:09:37.188 "name": "BaseBdev3", 00:09:37.188 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:37.188 "is_configured": false, 00:09:37.188 "data_offset": 0, 00:09:37.188 "data_size": 0 00:09:37.188 } 00:09:37.188 ] 00:09:37.188 }' 00:09:37.188 21:08:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:37.188 21:08:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:37.754 21:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:37.754 [2024-07-14 21:08:49.207235] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:37.754 [2024-07-14 21:08:49.207255] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3283b6c34500 name Existed_Raid, state configuring 00:09:37.754 21:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:38.012 [2024-07-14 21:08:49.467354] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:38.012 [2024-07-14 21:08:49.467465] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:38.013 [2024-07-14 21:08:49.467485] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:38.013 [2024-07-14 21:08:49.467504] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:38.013 [2024-07-14 21:08:49.467512] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:38.013 [2024-07-14 21:08:49.467528] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:38.013 21:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:38.275 [2024-07-14 21:08:49.680459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:38.275 BaseBdev1 00:09:38.275 21:08:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:09:38.275 21:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:38.275 21:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:38.275 21:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:38.275 21:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:38.275 21:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:38.275 21:08:49 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:38.538 21:08:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:38.796 [ 00:09:38.796 { 00:09:38.796 "name": "BaseBdev1", 00:09:38.796 "aliases": [ 00:09:38.796 "442cef1e-4225-11ef-aa83-81fbc7dfef58" 00:09:38.796 ], 00:09:38.796 "product_name": "Malloc disk", 00:09:38.796 "block_size": 512, 00:09:38.796 "num_blocks": 65536, 00:09:38.796 "uuid": "442cef1e-4225-11ef-aa83-81fbc7dfef58", 00:09:38.796 "assigned_rate_limits": { 00:09:38.796 "rw_ios_per_sec": 0, 00:09:38.796 "rw_mbytes_per_sec": 0, 00:09:38.796 "r_mbytes_per_sec": 0, 00:09:38.796 "w_mbytes_per_sec": 0 00:09:38.796 }, 00:09:38.796 "claimed": true, 00:09:38.796 "claim_type": "exclusive_write", 00:09:38.796 "zoned": false, 00:09:38.796 "supported_io_types": { 00:09:38.796 "read": true, 00:09:38.796 "write": true, 00:09:38.796 "unmap": true, 00:09:38.796 "flush": true, 00:09:38.796 "reset": true, 00:09:38.796 "nvme_admin": false, 00:09:38.796 "nvme_io": false, 00:09:38.796 "nvme_io_md": false, 00:09:38.796 "write_zeroes": true, 00:09:38.796 "zcopy": true, 00:09:38.796 "get_zone_info": false, 00:09:38.796 "zone_management": false, 00:09:38.796 "zone_append": false, 00:09:38.796 "compare": false, 00:09:38.796 "compare_and_write": false, 00:09:38.796 "abort": true, 00:09:38.796 "seek_hole": false, 00:09:38.796 "seek_data": false, 00:09:38.796 "copy": true, 00:09:38.796 "nvme_iov_md": false 00:09:38.796 }, 00:09:38.796 "memory_domains": [ 00:09:38.796 { 00:09:38.796 "dma_device_id": "system", 00:09:38.796 "dma_device_type": 1 00:09:38.796 }, 00:09:38.796 { 00:09:38.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.796 "dma_device_type": 2 00:09:38.796 } 00:09:38.796 ], 00:09:38.796 "driver_specific": {} 00:09:38.796 } 00:09:38.796 ] 00:09:38.796 21:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:38.796 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:38.796 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:38.796 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:38.796 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:38.796 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:38.796 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:38.796 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:38.796 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:38.796 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:38.796 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:38.796 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:38.796 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- 
# /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.054 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:39.054 "name": "Existed_Raid", 00:09:39.054 "uuid": "440c9607-4225-11ef-aa83-81fbc7dfef58", 00:09:39.054 "strip_size_kb": 64, 00:09:39.054 "state": "configuring", 00:09:39.054 "raid_level": "raid0", 00:09:39.054 "superblock": true, 00:09:39.054 "num_base_bdevs": 3, 00:09:39.054 "num_base_bdevs_discovered": 1, 00:09:39.054 "num_base_bdevs_operational": 3, 00:09:39.054 "base_bdevs_list": [ 00:09:39.054 { 00:09:39.054 "name": "BaseBdev1", 00:09:39.054 "uuid": "442cef1e-4225-11ef-aa83-81fbc7dfef58", 00:09:39.054 "is_configured": true, 00:09:39.054 "data_offset": 2048, 00:09:39.054 "data_size": 63488 00:09:39.054 }, 00:09:39.054 { 00:09:39.054 "name": "BaseBdev2", 00:09:39.054 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.054 "is_configured": false, 00:09:39.054 "data_offset": 0, 00:09:39.054 "data_size": 0 00:09:39.055 }, 00:09:39.055 { 00:09:39.055 "name": "BaseBdev3", 00:09:39.055 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:39.055 "is_configured": false, 00:09:39.055 "data_offset": 0, 00:09:39.055 "data_size": 0 00:09:39.055 } 00:09:39.055 ] 00:09:39.055 }' 00:09:39.055 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:39.055 21:08:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:39.313 21:08:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:39.575 [2024-07-14 21:08:51.027357] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:09:39.575 [2024-07-14 21:08:51.027412] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3283b6c34500 name Existed_Raid, state configuring 00:09:39.575 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:39.834 [2024-07-14 21:08:51.275437] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:39.834 [2024-07-14 21:08:51.276707] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:09:39.834 [2024-07-14 21:08:51.276754] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:09:39.834 [2024-07-14 21:08:51.276759] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:09:39.834 [2024-07-14 21:08:51.276767] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:09:39.834 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:09:39.834 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:39.834 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:39.834 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:39.835 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:39.835 21:08:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:39.835 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:39.835 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:39.835 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:39.835 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:39.835 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:39.835 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:39.835 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:39.835 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:40.094 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:40.094 "name": "Existed_Raid", 00:09:40.094 "uuid": "45207a54-4225-11ef-aa83-81fbc7dfef58", 00:09:40.094 "strip_size_kb": 64, 00:09:40.095 "state": "configuring", 00:09:40.095 "raid_level": "raid0", 00:09:40.095 "superblock": true, 00:09:40.095 "num_base_bdevs": 3, 00:09:40.095 "num_base_bdevs_discovered": 1, 00:09:40.095 "num_base_bdevs_operational": 3, 00:09:40.095 "base_bdevs_list": [ 00:09:40.095 { 00:09:40.095 "name": "BaseBdev1", 00:09:40.095 "uuid": "442cef1e-4225-11ef-aa83-81fbc7dfef58", 00:09:40.095 "is_configured": true, 00:09:40.095 "data_offset": 2048, 00:09:40.095 "data_size": 63488 00:09:40.095 }, 00:09:40.095 { 00:09:40.095 "name": "BaseBdev2", 00:09:40.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.095 "is_configured": false, 00:09:40.095 "data_offset": 0, 00:09:40.095 "data_size": 0 00:09:40.095 }, 00:09:40.095 { 00:09:40.095 "name": "BaseBdev3", 00:09:40.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:40.095 "is_configured": false, 00:09:40.095 "data_offset": 0, 00:09:40.095 "data_size": 0 00:09:40.095 } 00:09:40.095 ] 00:09:40.095 }' 00:09:40.095 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:40.095 21:08:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:40.353 21:08:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:40.612 [2024-07-14 21:08:52.083795] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:40.612 BaseBdev2 00:09:40.612 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:09:40.612 21:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:40.612 21:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:40.612 21:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:40.612 21:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:40.612 21:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:40.612 21:08:52 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:40.871 21:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:41.130 [ 00:09:41.130 { 00:09:41.130 "name": "BaseBdev2", 00:09:41.130 "aliases": [ 00:09:41.130 "459bcba2-4225-11ef-aa83-81fbc7dfef58" 00:09:41.130 ], 00:09:41.130 "product_name": "Malloc disk", 00:09:41.130 "block_size": 512, 00:09:41.130 "num_blocks": 65536, 00:09:41.130 "uuid": "459bcba2-4225-11ef-aa83-81fbc7dfef58", 00:09:41.130 "assigned_rate_limits": { 00:09:41.130 "rw_ios_per_sec": 0, 00:09:41.130 "rw_mbytes_per_sec": 0, 00:09:41.130 "r_mbytes_per_sec": 0, 00:09:41.130 "w_mbytes_per_sec": 0 00:09:41.130 }, 00:09:41.130 "claimed": true, 00:09:41.130 "claim_type": "exclusive_write", 00:09:41.130 "zoned": false, 00:09:41.130 "supported_io_types": { 00:09:41.130 "read": true, 00:09:41.130 "write": true, 00:09:41.130 "unmap": true, 00:09:41.130 "flush": true, 00:09:41.130 "reset": true, 00:09:41.130 "nvme_admin": false, 00:09:41.130 "nvme_io": false, 00:09:41.130 "nvme_io_md": false, 00:09:41.130 "write_zeroes": true, 00:09:41.130 "zcopy": true, 00:09:41.130 "get_zone_info": false, 00:09:41.130 "zone_management": false, 00:09:41.130 "zone_append": false, 00:09:41.130 "compare": false, 00:09:41.130 "compare_and_write": false, 00:09:41.130 "abort": true, 00:09:41.130 "seek_hole": false, 00:09:41.130 "seek_data": false, 00:09:41.130 "copy": true, 00:09:41.130 "nvme_iov_md": false 00:09:41.130 }, 00:09:41.130 "memory_domains": [ 00:09:41.130 { 00:09:41.130 "dma_device_id": "system", 00:09:41.130 "dma_device_type": 1 00:09:41.130 }, 00:09:41.130 { 00:09:41.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:41.130 "dma_device_type": 2 00:09:41.130 } 00:09:41.130 ], 00:09:41.130 "driver_specific": {} 00:09:41.130 } 00:09:41.130 ] 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:41.130 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:41.389 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:41.389 "name": "Existed_Raid", 00:09:41.389 "uuid": "45207a54-4225-11ef-aa83-81fbc7dfef58", 00:09:41.389 "strip_size_kb": 64, 00:09:41.389 "state": "configuring", 00:09:41.389 "raid_level": "raid0", 00:09:41.389 "superblock": true, 00:09:41.389 "num_base_bdevs": 3, 00:09:41.389 "num_base_bdevs_discovered": 2, 00:09:41.389 "num_base_bdevs_operational": 3, 00:09:41.389 "base_bdevs_list": [ 00:09:41.389 { 00:09:41.389 "name": "BaseBdev1", 00:09:41.389 "uuid": "442cef1e-4225-11ef-aa83-81fbc7dfef58", 00:09:41.389 "is_configured": true, 00:09:41.389 "data_offset": 2048, 00:09:41.389 "data_size": 63488 00:09:41.389 }, 00:09:41.389 { 00:09:41.389 "name": "BaseBdev2", 00:09:41.389 "uuid": "459bcba2-4225-11ef-aa83-81fbc7dfef58", 00:09:41.389 "is_configured": true, 00:09:41.389 "data_offset": 2048, 00:09:41.389 "data_size": 63488 00:09:41.389 }, 00:09:41.389 { 00:09:41.389 "name": "BaseBdev3", 00:09:41.389 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:41.389 "is_configured": false, 00:09:41.389 "data_offset": 0, 00:09:41.389 "data_size": 0 00:09:41.389 } 00:09:41.389 ] 00:09:41.389 }' 00:09:41.389 21:08:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:41.389 21:08:52 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:41.648 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:41.906 [2024-07-14 21:08:53.331880] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:41.906 [2024-07-14 21:08:53.331951] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3283b6c34a00 00:09:41.906 [2024-07-14 21:08:53.331958] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:41.906 [2024-07-14 21:08:53.331977] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3283b6c97e20 00:09:41.906 [2024-07-14 21:08:53.332036] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3283b6c34a00 00:09:41.906 [2024-07-14 21:08:53.332040] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3283b6c34a00 00:09:41.906 [2024-07-14 21:08:53.332078] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:41.906 BaseBdev3 00:09:41.906 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:09:41.906 21:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:41.906 21:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:41.906 21:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:41.906 21:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:41.906 21:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # 
bdev_timeout=2000 00:09:41.906 21:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:42.164 21:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:42.423 [ 00:09:42.423 { 00:09:42.423 "name": "BaseBdev3", 00:09:42.423 "aliases": [ 00:09:42.423 "465a3dac-4225-11ef-aa83-81fbc7dfef58" 00:09:42.423 ], 00:09:42.423 "product_name": "Malloc disk", 00:09:42.423 "block_size": 512, 00:09:42.423 "num_blocks": 65536, 00:09:42.423 "uuid": "465a3dac-4225-11ef-aa83-81fbc7dfef58", 00:09:42.423 "assigned_rate_limits": { 00:09:42.423 "rw_ios_per_sec": 0, 00:09:42.423 "rw_mbytes_per_sec": 0, 00:09:42.423 "r_mbytes_per_sec": 0, 00:09:42.423 "w_mbytes_per_sec": 0 00:09:42.423 }, 00:09:42.423 "claimed": true, 00:09:42.423 "claim_type": "exclusive_write", 00:09:42.423 "zoned": false, 00:09:42.423 "supported_io_types": { 00:09:42.423 "read": true, 00:09:42.423 "write": true, 00:09:42.423 "unmap": true, 00:09:42.423 "flush": true, 00:09:42.423 "reset": true, 00:09:42.423 "nvme_admin": false, 00:09:42.423 "nvme_io": false, 00:09:42.423 "nvme_io_md": false, 00:09:42.423 "write_zeroes": true, 00:09:42.423 "zcopy": true, 00:09:42.423 "get_zone_info": false, 00:09:42.423 "zone_management": false, 00:09:42.423 "zone_append": false, 00:09:42.423 "compare": false, 00:09:42.423 "compare_and_write": false, 00:09:42.423 "abort": true, 00:09:42.423 "seek_hole": false, 00:09:42.423 "seek_data": false, 00:09:42.423 "copy": true, 00:09:42.423 "nvme_iov_md": false 00:09:42.423 }, 00:09:42.423 "memory_domains": [ 00:09:42.423 { 00:09:42.423 "dma_device_id": "system", 00:09:42.423 "dma_device_type": 1 00:09:42.423 }, 00:09:42.423 { 00:09:42.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:42.423 "dma_device_type": 2 00:09:42.423 } 00:09:42.423 ], 00:09:42.423 "driver_specific": {} 00:09:42.423 } 00:09:42.423 ] 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:42.423 21:08:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:42.681 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:42.681 "name": "Existed_Raid", 00:09:42.681 "uuid": "45207a54-4225-11ef-aa83-81fbc7dfef58", 00:09:42.681 "strip_size_kb": 64, 00:09:42.681 "state": "online", 00:09:42.681 "raid_level": "raid0", 00:09:42.681 "superblock": true, 00:09:42.681 "num_base_bdevs": 3, 00:09:42.681 "num_base_bdevs_discovered": 3, 00:09:42.681 "num_base_bdevs_operational": 3, 00:09:42.681 "base_bdevs_list": [ 00:09:42.681 { 00:09:42.681 "name": "BaseBdev1", 00:09:42.681 "uuid": "442cef1e-4225-11ef-aa83-81fbc7dfef58", 00:09:42.681 "is_configured": true, 00:09:42.681 "data_offset": 2048, 00:09:42.681 "data_size": 63488 00:09:42.681 }, 00:09:42.681 { 00:09:42.681 "name": "BaseBdev2", 00:09:42.681 "uuid": "459bcba2-4225-11ef-aa83-81fbc7dfef58", 00:09:42.681 "is_configured": true, 00:09:42.681 "data_offset": 2048, 00:09:42.681 "data_size": 63488 00:09:42.681 }, 00:09:42.681 { 00:09:42.681 "name": "BaseBdev3", 00:09:42.681 "uuid": "465a3dac-4225-11ef-aa83-81fbc7dfef58", 00:09:42.681 "is_configured": true, 00:09:42.681 "data_offset": 2048, 00:09:42.681 "data_size": 63488 00:09:42.681 } 00:09:42.681 ] 00:09:42.681 }' 00:09:42.681 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:42.681 21:08:54 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:42.938 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:09:42.938 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:42.938 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:42.938 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:42.938 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:42.938 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:42.938 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:42.938 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:09:43.196 [2024-07-14 21:08:54.563886] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:43.196 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:43.196 "name": "Existed_Raid", 00:09:43.196 "aliases": [ 00:09:43.196 "45207a54-4225-11ef-aa83-81fbc7dfef58" 00:09:43.196 ], 00:09:43.196 "product_name": "Raid Volume", 00:09:43.196 "block_size": 512, 00:09:43.196 "num_blocks": 190464, 00:09:43.196 "uuid": "45207a54-4225-11ef-aa83-81fbc7dfef58", 00:09:43.196 "assigned_rate_limits": { 00:09:43.196 "rw_ios_per_sec": 0, 00:09:43.196 "rw_mbytes_per_sec": 0, 00:09:43.196 "r_mbytes_per_sec": 0, 00:09:43.196 "w_mbytes_per_sec": 0 00:09:43.196 }, 00:09:43.196 "claimed": false, 00:09:43.196 "zoned": false, 
00:09:43.196 "supported_io_types": { 00:09:43.196 "read": true, 00:09:43.196 "write": true, 00:09:43.196 "unmap": true, 00:09:43.196 "flush": true, 00:09:43.196 "reset": true, 00:09:43.196 "nvme_admin": false, 00:09:43.196 "nvme_io": false, 00:09:43.196 "nvme_io_md": false, 00:09:43.196 "write_zeroes": true, 00:09:43.196 "zcopy": false, 00:09:43.196 "get_zone_info": false, 00:09:43.196 "zone_management": false, 00:09:43.196 "zone_append": false, 00:09:43.196 "compare": false, 00:09:43.196 "compare_and_write": false, 00:09:43.196 "abort": false, 00:09:43.196 "seek_hole": false, 00:09:43.196 "seek_data": false, 00:09:43.196 "copy": false, 00:09:43.196 "nvme_iov_md": false 00:09:43.196 }, 00:09:43.196 "memory_domains": [ 00:09:43.196 { 00:09:43.196 "dma_device_id": "system", 00:09:43.196 "dma_device_type": 1 00:09:43.196 }, 00:09:43.196 { 00:09:43.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.196 "dma_device_type": 2 00:09:43.196 }, 00:09:43.196 { 00:09:43.196 "dma_device_id": "system", 00:09:43.196 "dma_device_type": 1 00:09:43.196 }, 00:09:43.196 { 00:09:43.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.196 "dma_device_type": 2 00:09:43.196 }, 00:09:43.196 { 00:09:43.196 "dma_device_id": "system", 00:09:43.196 "dma_device_type": 1 00:09:43.196 }, 00:09:43.196 { 00:09:43.196 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.196 "dma_device_type": 2 00:09:43.196 } 00:09:43.196 ], 00:09:43.196 "driver_specific": { 00:09:43.196 "raid": { 00:09:43.196 "uuid": "45207a54-4225-11ef-aa83-81fbc7dfef58", 00:09:43.196 "strip_size_kb": 64, 00:09:43.196 "state": "online", 00:09:43.196 "raid_level": "raid0", 00:09:43.196 "superblock": true, 00:09:43.196 "num_base_bdevs": 3, 00:09:43.196 "num_base_bdevs_discovered": 3, 00:09:43.196 "num_base_bdevs_operational": 3, 00:09:43.196 "base_bdevs_list": [ 00:09:43.196 { 00:09:43.196 "name": "BaseBdev1", 00:09:43.197 "uuid": "442cef1e-4225-11ef-aa83-81fbc7dfef58", 00:09:43.197 "is_configured": true, 00:09:43.197 "data_offset": 2048, 00:09:43.197 "data_size": 63488 00:09:43.197 }, 00:09:43.197 { 00:09:43.197 "name": "BaseBdev2", 00:09:43.197 "uuid": "459bcba2-4225-11ef-aa83-81fbc7dfef58", 00:09:43.197 "is_configured": true, 00:09:43.197 "data_offset": 2048, 00:09:43.197 "data_size": 63488 00:09:43.197 }, 00:09:43.197 { 00:09:43.197 "name": "BaseBdev3", 00:09:43.197 "uuid": "465a3dac-4225-11ef-aa83-81fbc7dfef58", 00:09:43.197 "is_configured": true, 00:09:43.197 "data_offset": 2048, 00:09:43.197 "data_size": 63488 00:09:43.197 } 00:09:43.197 ] 00:09:43.197 } 00:09:43.197 } 00:09:43.197 }' 00:09:43.197 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:43.197 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:09:43.197 BaseBdev2 00:09:43.197 BaseBdev3' 00:09:43.197 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:43.197 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:09:43.197 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:43.455 "name": "BaseBdev1", 00:09:43.455 "aliases": [ 00:09:43.455 "442cef1e-4225-11ef-aa83-81fbc7dfef58" 00:09:43.455 
], 00:09:43.455 "product_name": "Malloc disk", 00:09:43.455 "block_size": 512, 00:09:43.455 "num_blocks": 65536, 00:09:43.455 "uuid": "442cef1e-4225-11ef-aa83-81fbc7dfef58", 00:09:43.455 "assigned_rate_limits": { 00:09:43.455 "rw_ios_per_sec": 0, 00:09:43.455 "rw_mbytes_per_sec": 0, 00:09:43.455 "r_mbytes_per_sec": 0, 00:09:43.455 "w_mbytes_per_sec": 0 00:09:43.455 }, 00:09:43.455 "claimed": true, 00:09:43.455 "claim_type": "exclusive_write", 00:09:43.455 "zoned": false, 00:09:43.455 "supported_io_types": { 00:09:43.455 "read": true, 00:09:43.455 "write": true, 00:09:43.455 "unmap": true, 00:09:43.455 "flush": true, 00:09:43.455 "reset": true, 00:09:43.455 "nvme_admin": false, 00:09:43.455 "nvme_io": false, 00:09:43.455 "nvme_io_md": false, 00:09:43.455 "write_zeroes": true, 00:09:43.455 "zcopy": true, 00:09:43.455 "get_zone_info": false, 00:09:43.455 "zone_management": false, 00:09:43.455 "zone_append": false, 00:09:43.455 "compare": false, 00:09:43.455 "compare_and_write": false, 00:09:43.455 "abort": true, 00:09:43.455 "seek_hole": false, 00:09:43.455 "seek_data": false, 00:09:43.455 "copy": true, 00:09:43.455 "nvme_iov_md": false 00:09:43.455 }, 00:09:43.455 "memory_domains": [ 00:09:43.455 { 00:09:43.455 "dma_device_id": "system", 00:09:43.455 "dma_device_type": 1 00:09:43.455 }, 00:09:43.455 { 00:09:43.455 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.455 "dma_device_type": 2 00:09:43.455 } 00:09:43.455 ], 00:09:43.455 "driver_specific": {} 00:09:43.455 }' 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:43.455 21:08:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:43.714 "name": "BaseBdev2", 00:09:43.714 "aliases": [ 00:09:43.714 "459bcba2-4225-11ef-aa83-81fbc7dfef58" 00:09:43.714 ], 00:09:43.714 "product_name": "Malloc disk", 00:09:43.714 "block_size": 512, 00:09:43.714 "num_blocks": 65536, 00:09:43.714 "uuid": 
"459bcba2-4225-11ef-aa83-81fbc7dfef58", 00:09:43.714 "assigned_rate_limits": { 00:09:43.714 "rw_ios_per_sec": 0, 00:09:43.714 "rw_mbytes_per_sec": 0, 00:09:43.714 "r_mbytes_per_sec": 0, 00:09:43.714 "w_mbytes_per_sec": 0 00:09:43.714 }, 00:09:43.714 "claimed": true, 00:09:43.714 "claim_type": "exclusive_write", 00:09:43.714 "zoned": false, 00:09:43.714 "supported_io_types": { 00:09:43.714 "read": true, 00:09:43.714 "write": true, 00:09:43.714 "unmap": true, 00:09:43.714 "flush": true, 00:09:43.714 "reset": true, 00:09:43.714 "nvme_admin": false, 00:09:43.714 "nvme_io": false, 00:09:43.714 "nvme_io_md": false, 00:09:43.714 "write_zeroes": true, 00:09:43.714 "zcopy": true, 00:09:43.714 "get_zone_info": false, 00:09:43.714 "zone_management": false, 00:09:43.714 "zone_append": false, 00:09:43.714 "compare": false, 00:09:43.714 "compare_and_write": false, 00:09:43.714 "abort": true, 00:09:43.714 "seek_hole": false, 00:09:43.714 "seek_data": false, 00:09:43.714 "copy": true, 00:09:43.714 "nvme_iov_md": false 00:09:43.714 }, 00:09:43.714 "memory_domains": [ 00:09:43.714 { 00:09:43.714 "dma_device_id": "system", 00:09:43.714 "dma_device_type": 1 00:09:43.714 }, 00:09:43.714 { 00:09:43.714 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.714 "dma_device_type": 2 00:09:43.714 } 00:09:43.714 ], 00:09:43.714 "driver_specific": {} 00:09:43.714 }' 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:43.714 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:43.972 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:43.972 "name": "BaseBdev3", 00:09:43.972 "aliases": [ 00:09:43.972 "465a3dac-4225-11ef-aa83-81fbc7dfef58" 00:09:43.972 ], 00:09:43.972 "product_name": "Malloc disk", 00:09:43.972 "block_size": 512, 00:09:43.972 "num_blocks": 65536, 00:09:43.972 "uuid": "465a3dac-4225-11ef-aa83-81fbc7dfef58", 00:09:43.972 "assigned_rate_limits": { 00:09:43.972 "rw_ios_per_sec": 0, 00:09:43.972 "rw_mbytes_per_sec": 0, 
00:09:43.972 "r_mbytes_per_sec": 0, 00:09:43.972 "w_mbytes_per_sec": 0 00:09:43.972 }, 00:09:43.972 "claimed": true, 00:09:43.972 "claim_type": "exclusive_write", 00:09:43.972 "zoned": false, 00:09:43.972 "supported_io_types": { 00:09:43.972 "read": true, 00:09:43.972 "write": true, 00:09:43.972 "unmap": true, 00:09:43.972 "flush": true, 00:09:43.972 "reset": true, 00:09:43.972 "nvme_admin": false, 00:09:43.972 "nvme_io": false, 00:09:43.972 "nvme_io_md": false, 00:09:43.972 "write_zeroes": true, 00:09:43.972 "zcopy": true, 00:09:43.972 "get_zone_info": false, 00:09:43.972 "zone_management": false, 00:09:43.972 "zone_append": false, 00:09:43.972 "compare": false, 00:09:43.972 "compare_and_write": false, 00:09:43.972 "abort": true, 00:09:43.972 "seek_hole": false, 00:09:43.972 "seek_data": false, 00:09:43.972 "copy": true, 00:09:43.972 "nvme_iov_md": false 00:09:43.973 }, 00:09:43.973 "memory_domains": [ 00:09:43.973 { 00:09:43.973 "dma_device_id": "system", 00:09:43.973 "dma_device_type": 1 00:09:43.973 }, 00:09:43.973 { 00:09:43.973 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:43.973 "dma_device_type": 2 00:09:43.973 } 00:09:43.973 ], 00:09:43.973 "driver_specific": {} 00:09:43.973 }' 00:09:43.973 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:44.231 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:44.232 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:44.232 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:44.232 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:44.232 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:44.232 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:44.232 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:44.232 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:44.232 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:44.232 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:44.232 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:44.232 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:44.491 [2024-07-14 21:08:55.787895] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:44.491 [2024-07-14 21:08:55.787940] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:44.491 [2024-07-14 21:08:55.787968] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # 
expected_state=offline 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 2 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:44.491 21:08:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:44.749 21:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:44.749 "name": "Existed_Raid", 00:09:44.749 "uuid": "45207a54-4225-11ef-aa83-81fbc7dfef58", 00:09:44.749 "strip_size_kb": 64, 00:09:44.749 "state": "offline", 00:09:44.749 "raid_level": "raid0", 00:09:44.749 "superblock": true, 00:09:44.749 "num_base_bdevs": 3, 00:09:44.749 "num_base_bdevs_discovered": 2, 00:09:44.749 "num_base_bdevs_operational": 2, 00:09:44.749 "base_bdevs_list": [ 00:09:44.749 { 00:09:44.749 "name": null, 00:09:44.749 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:44.749 "is_configured": false, 00:09:44.749 "data_offset": 2048, 00:09:44.749 "data_size": 63488 00:09:44.749 }, 00:09:44.749 { 00:09:44.749 "name": "BaseBdev2", 00:09:44.749 "uuid": "459bcba2-4225-11ef-aa83-81fbc7dfef58", 00:09:44.749 "is_configured": true, 00:09:44.749 "data_offset": 2048, 00:09:44.749 "data_size": 63488 00:09:44.749 }, 00:09:44.749 { 00:09:44.749 "name": "BaseBdev3", 00:09:44.749 "uuid": "465a3dac-4225-11ef-aa83-81fbc7dfef58", 00:09:44.749 "is_configured": true, 00:09:44.749 "data_offset": 2048, 00:09:44.749 "data_size": 63488 00:09:44.749 } 00:09:44.749 ] 00:09:44.749 }' 00:09:44.749 21:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:44.749 21:08:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:45.008 21:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:09:45.008 21:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:45.008 21:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.008 21:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:45.266 21:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # 
raid_bdev=Existed_Raid 00:09:45.266 21:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:45.266 21:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:09:45.525 [2024-07-14 21:08:56.880843] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:45.525 21:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:45.525 21:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:45.525 21:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:45.525 21:08:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:09:45.784 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:09:45.784 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:09:45.784 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:09:45.784 [2024-07-14 21:08:57.309621] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:45.784 [2024-07-14 21:08:57.309674] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3283b6c34a00 name Existed_Raid, state offline 00:09:45.784 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:09:45.784 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:09:46.042 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:46.042 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:09:46.042 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:09:46.042 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:09:46.042 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:09:46.042 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:09:46.042 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:46.042 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:09:46.300 BaseBdev2 00:09:46.300 21:08:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:09:46.300 21:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:09:46.300 21:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:46.300 21:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:46.300 21:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:46.300 21:08:57 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:46.300 21:08:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:46.558 21:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:09:46.816 [ 00:09:46.816 { 00:09:46.816 "name": "BaseBdev2", 00:09:46.816 "aliases": [ 00:09:46.816 "48fb3be7-4225-11ef-aa83-81fbc7dfef58" 00:09:46.816 ], 00:09:46.816 "product_name": "Malloc disk", 00:09:46.816 "block_size": 512, 00:09:46.816 "num_blocks": 65536, 00:09:46.816 "uuid": "48fb3be7-4225-11ef-aa83-81fbc7dfef58", 00:09:46.816 "assigned_rate_limits": { 00:09:46.816 "rw_ios_per_sec": 0, 00:09:46.816 "rw_mbytes_per_sec": 0, 00:09:46.816 "r_mbytes_per_sec": 0, 00:09:46.816 "w_mbytes_per_sec": 0 00:09:46.816 }, 00:09:46.816 "claimed": false, 00:09:46.816 "zoned": false, 00:09:46.816 "supported_io_types": { 00:09:46.816 "read": true, 00:09:46.816 "write": true, 00:09:46.816 "unmap": true, 00:09:46.816 "flush": true, 00:09:46.816 "reset": true, 00:09:46.816 "nvme_admin": false, 00:09:46.816 "nvme_io": false, 00:09:46.816 "nvme_io_md": false, 00:09:46.816 "write_zeroes": true, 00:09:46.816 "zcopy": true, 00:09:46.816 "get_zone_info": false, 00:09:46.816 "zone_management": false, 00:09:46.816 "zone_append": false, 00:09:46.816 "compare": false, 00:09:46.816 "compare_and_write": false, 00:09:46.816 "abort": true, 00:09:46.816 "seek_hole": false, 00:09:46.816 "seek_data": false, 00:09:46.816 "copy": true, 00:09:46.816 "nvme_iov_md": false 00:09:46.816 }, 00:09:46.816 "memory_domains": [ 00:09:46.816 { 00:09:46.816 "dma_device_id": "system", 00:09:46.816 "dma_device_type": 1 00:09:46.816 }, 00:09:46.816 { 00:09:46.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:46.816 "dma_device_type": 2 00:09:46.816 } 00:09:46.816 ], 00:09:46.816 "driver_specific": {} 00:09:46.816 } 00:09:46.816 ] 00:09:46.816 21:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:46.816 21:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:46.816 21:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:46.816 21:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:09:47.075 BaseBdev3 00:09:47.075 21:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:09:47.075 21:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:09:47.075 21:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:47.075 21:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:47.075 21:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:47.075 21:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:47.075 21:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:47.333 21:08:58 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:09:47.591 [ 00:09:47.591 { 00:09:47.591 "name": "BaseBdev3", 00:09:47.591 "aliases": [ 00:09:47.591 "4976874b-4225-11ef-aa83-81fbc7dfef58" 00:09:47.591 ], 00:09:47.591 "product_name": "Malloc disk", 00:09:47.591 "block_size": 512, 00:09:47.591 "num_blocks": 65536, 00:09:47.591 "uuid": "4976874b-4225-11ef-aa83-81fbc7dfef58", 00:09:47.591 "assigned_rate_limits": { 00:09:47.591 "rw_ios_per_sec": 0, 00:09:47.591 "rw_mbytes_per_sec": 0, 00:09:47.591 "r_mbytes_per_sec": 0, 00:09:47.591 "w_mbytes_per_sec": 0 00:09:47.591 }, 00:09:47.591 "claimed": false, 00:09:47.591 "zoned": false, 00:09:47.591 "supported_io_types": { 00:09:47.591 "read": true, 00:09:47.591 "write": true, 00:09:47.591 "unmap": true, 00:09:47.591 "flush": true, 00:09:47.591 "reset": true, 00:09:47.591 "nvme_admin": false, 00:09:47.591 "nvme_io": false, 00:09:47.591 "nvme_io_md": false, 00:09:47.591 "write_zeroes": true, 00:09:47.591 "zcopy": true, 00:09:47.591 "get_zone_info": false, 00:09:47.591 "zone_management": false, 00:09:47.591 "zone_append": false, 00:09:47.591 "compare": false, 00:09:47.591 "compare_and_write": false, 00:09:47.591 "abort": true, 00:09:47.591 "seek_hole": false, 00:09:47.591 "seek_data": false, 00:09:47.591 "copy": true, 00:09:47.591 "nvme_iov_md": false 00:09:47.591 }, 00:09:47.591 "memory_domains": [ 00:09:47.591 { 00:09:47.591 "dma_device_id": "system", 00:09:47.591 "dma_device_type": 1 00:09:47.591 }, 00:09:47.591 { 00:09:47.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.591 "dma_device_type": 2 00:09:47.591 } 00:09:47.591 ], 00:09:47.591 "driver_specific": {} 00:09:47.591 } 00:09:47.591 ] 00:09:47.591 21:08:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:47.591 21:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:09:47.591 21:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:09:47.591 21:08:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:09:47.861 [2024-07-14 21:08:59.194280] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:09:47.861 [2024-07-14 21:08:59.194358] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:09:47.861 [2024-07-14 21:08:59.194379] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:47.861 [2024-07-14 21:08:59.195117] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:47.861 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:47.861 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:47.861 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:47.861 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:47.861 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:47.861 21:08:59 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:47.861 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:47.861 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:47.862 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:47.862 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:47.862 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:47.862 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.157 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:48.157 "name": "Existed_Raid", 00:09:48.157 "uuid": "49d8cc37-4225-11ef-aa83-81fbc7dfef58", 00:09:48.157 "strip_size_kb": 64, 00:09:48.157 "state": "configuring", 00:09:48.157 "raid_level": "raid0", 00:09:48.157 "superblock": true, 00:09:48.157 "num_base_bdevs": 3, 00:09:48.157 "num_base_bdevs_discovered": 2, 00:09:48.157 "num_base_bdevs_operational": 3, 00:09:48.157 "base_bdevs_list": [ 00:09:48.157 { 00:09:48.157 "name": "BaseBdev1", 00:09:48.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.157 "is_configured": false, 00:09:48.157 "data_offset": 0, 00:09:48.157 "data_size": 0 00:09:48.157 }, 00:09:48.157 { 00:09:48.157 "name": "BaseBdev2", 00:09:48.157 "uuid": "48fb3be7-4225-11ef-aa83-81fbc7dfef58", 00:09:48.157 "is_configured": true, 00:09:48.157 "data_offset": 2048, 00:09:48.157 "data_size": 63488 00:09:48.157 }, 00:09:48.157 { 00:09:48.157 "name": "BaseBdev3", 00:09:48.157 "uuid": "4976874b-4225-11ef-aa83-81fbc7dfef58", 00:09:48.157 "is_configured": true, 00:09:48.157 "data_offset": 2048, 00:09:48.157 "data_size": 63488 00:09:48.157 } 00:09:48.157 ] 00:09:48.157 }' 00:09:48.157 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:48.157 21:08:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:48.415 21:08:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:09:48.674 [2024-07-14 21:09:00.038297] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:09:48.674 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:48.674 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:48.674 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:48.674 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:48.674 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:48.674 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:48.674 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:48.674 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:09:48.674 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:48.674 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:48.674 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:48.674 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:48.932 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:48.932 "name": "Existed_Raid", 00:09:48.932 "uuid": "49d8cc37-4225-11ef-aa83-81fbc7dfef58", 00:09:48.932 "strip_size_kb": 64, 00:09:48.932 "state": "configuring", 00:09:48.932 "raid_level": "raid0", 00:09:48.932 "superblock": true, 00:09:48.932 "num_base_bdevs": 3, 00:09:48.932 "num_base_bdevs_discovered": 1, 00:09:48.932 "num_base_bdevs_operational": 3, 00:09:48.932 "base_bdevs_list": [ 00:09:48.932 { 00:09:48.932 "name": "BaseBdev1", 00:09:48.932 "uuid": "00000000-0000-0000-0000-000000000000", 00:09:48.932 "is_configured": false, 00:09:48.932 "data_offset": 0, 00:09:48.932 "data_size": 0 00:09:48.932 }, 00:09:48.932 { 00:09:48.932 "name": null, 00:09:48.932 "uuid": "48fb3be7-4225-11ef-aa83-81fbc7dfef58", 00:09:48.932 "is_configured": false, 00:09:48.932 "data_offset": 2048, 00:09:48.932 "data_size": 63488 00:09:48.932 }, 00:09:48.932 { 00:09:48.932 "name": "BaseBdev3", 00:09:48.932 "uuid": "4976874b-4225-11ef-aa83-81fbc7dfef58", 00:09:48.932 "is_configured": true, 00:09:48.932 "data_offset": 2048, 00:09:48.932 "data_size": 63488 00:09:48.932 } 00:09:48.932 ] 00:09:48.932 }' 00:09:48.932 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:48.932 21:09:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:49.190 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:49.190 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:49.448 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:09:49.448 21:09:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:09:49.705 [2024-07-14 21:09:01.226579] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:09:49.705 BaseBdev1 00:09:49.705 21:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:09:49.705 21:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:09:49.705 21:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:49.706 21:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:49.706 21:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:49.706 21:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:49.706 21:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:50.271 21:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:09:50.529 [ 00:09:50.529 { 00:09:50.529 "name": "BaseBdev1", 00:09:50.529 "aliases": [ 00:09:50.529 "4b0edf8f-4225-11ef-aa83-81fbc7dfef58" 00:09:50.529 ], 00:09:50.529 "product_name": "Malloc disk", 00:09:50.529 "block_size": 512, 00:09:50.529 "num_blocks": 65536, 00:09:50.529 "uuid": "4b0edf8f-4225-11ef-aa83-81fbc7dfef58", 00:09:50.529 "assigned_rate_limits": { 00:09:50.529 "rw_ios_per_sec": 0, 00:09:50.529 "rw_mbytes_per_sec": 0, 00:09:50.529 "r_mbytes_per_sec": 0, 00:09:50.529 "w_mbytes_per_sec": 0 00:09:50.529 }, 00:09:50.529 "claimed": true, 00:09:50.529 "claim_type": "exclusive_write", 00:09:50.529 "zoned": false, 00:09:50.529 "supported_io_types": { 00:09:50.529 "read": true, 00:09:50.529 "write": true, 00:09:50.529 "unmap": true, 00:09:50.529 "flush": true, 00:09:50.529 "reset": true, 00:09:50.529 "nvme_admin": false, 00:09:50.529 "nvme_io": false, 00:09:50.529 "nvme_io_md": false, 00:09:50.529 "write_zeroes": true, 00:09:50.529 "zcopy": true, 00:09:50.529 "get_zone_info": false, 00:09:50.529 "zone_management": false, 00:09:50.529 "zone_append": false, 00:09:50.529 "compare": false, 00:09:50.529 "compare_and_write": false, 00:09:50.529 "abort": true, 00:09:50.529 "seek_hole": false, 00:09:50.529 "seek_data": false, 00:09:50.529 "copy": true, 00:09:50.529 "nvme_iov_md": false 00:09:50.529 }, 00:09:50.529 "memory_domains": [ 00:09:50.529 { 00:09:50.529 "dma_device_id": "system", 00:09:50.529 "dma_device_type": 1 00:09:50.529 }, 00:09:50.529 { 00:09:50.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:50.529 "dma_device_type": 2 00:09:50.529 } 00:09:50.529 ], 00:09:50.529 "driver_specific": {} 00:09:50.529 } 00:09:50.529 ] 00:09:50.529 21:09:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:50.529 21:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:50.529 21:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:50.529 21:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:50.529 21:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:50.529 21:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:50.529 21:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:50.529 21:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:50.529 21:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:50.529 21:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:50.529 21:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:50.529 21:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:50.529 21:09:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:09:50.788 21:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:50.788 "name": "Existed_Raid", 00:09:50.788 "uuid": "49d8cc37-4225-11ef-aa83-81fbc7dfef58", 00:09:50.788 "strip_size_kb": 64, 00:09:50.788 "state": "configuring", 00:09:50.788 "raid_level": "raid0", 00:09:50.788 "superblock": true, 00:09:50.788 "num_base_bdevs": 3, 00:09:50.788 "num_base_bdevs_discovered": 2, 00:09:50.788 "num_base_bdevs_operational": 3, 00:09:50.788 "base_bdevs_list": [ 00:09:50.788 { 00:09:50.788 "name": "BaseBdev1", 00:09:50.788 "uuid": "4b0edf8f-4225-11ef-aa83-81fbc7dfef58", 00:09:50.788 "is_configured": true, 00:09:50.788 "data_offset": 2048, 00:09:50.788 "data_size": 63488 00:09:50.788 }, 00:09:50.788 { 00:09:50.788 "name": null, 00:09:50.788 "uuid": "48fb3be7-4225-11ef-aa83-81fbc7dfef58", 00:09:50.788 "is_configured": false, 00:09:50.788 "data_offset": 2048, 00:09:50.788 "data_size": 63488 00:09:50.788 }, 00:09:50.788 { 00:09:50.788 "name": "BaseBdev3", 00:09:50.788 "uuid": "4976874b-4225-11ef-aa83-81fbc7dfef58", 00:09:50.788 "is_configured": true, 00:09:50.788 "data_offset": 2048, 00:09:50.788 "data_size": 63488 00:09:50.788 } 00:09:50.788 ] 00:09:50.788 }' 00:09:50.788 21:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:50.788 21:09:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:51.046 21:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:51.046 21:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:51.303 21:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:09:51.303 21:09:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:09:51.561 [2024-07-14 21:09:03.094390] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:09:51.818 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:51.818 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:51.818 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:51.818 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:51.818 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:51.818 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:51.818 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:51.818 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:51.818 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:51.818 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:51.818 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
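The verify_raid_bdev_state helper traced above and below boils down to one fetch-and-filter step: pull every raid bdev over the test's private RPC socket, keep the entry under test, then compare its fields against the expected values. A condensed sketch, using only paths and names this run already uses (the helper's real body lives in bdev_raid.sh):

    tmp=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
          jq -r '.[] | select(.name == "Existed_Raid")')
    # fields such as .state, .raid_level, .strip_size_kb, .num_base_bdevs and
    # .num_base_bdevs_operational are then checked against the expected values
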
00:09:51.818 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:52.076 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:52.076 "name": "Existed_Raid", 00:09:52.076 "uuid": "49d8cc37-4225-11ef-aa83-81fbc7dfef58", 00:09:52.076 "strip_size_kb": 64, 00:09:52.076 "state": "configuring", 00:09:52.076 "raid_level": "raid0", 00:09:52.076 "superblock": true, 00:09:52.076 "num_base_bdevs": 3, 00:09:52.076 "num_base_bdevs_discovered": 1, 00:09:52.076 "num_base_bdevs_operational": 3, 00:09:52.076 "base_bdevs_list": [ 00:09:52.076 { 00:09:52.076 "name": "BaseBdev1", 00:09:52.076 "uuid": "4b0edf8f-4225-11ef-aa83-81fbc7dfef58", 00:09:52.076 "is_configured": true, 00:09:52.076 "data_offset": 2048, 00:09:52.076 "data_size": 63488 00:09:52.076 }, 00:09:52.076 { 00:09:52.076 "name": null, 00:09:52.076 "uuid": "48fb3be7-4225-11ef-aa83-81fbc7dfef58", 00:09:52.076 "is_configured": false, 00:09:52.076 "data_offset": 2048, 00:09:52.076 "data_size": 63488 00:09:52.076 }, 00:09:52.076 { 00:09:52.076 "name": null, 00:09:52.076 "uuid": "4976874b-4225-11ef-aa83-81fbc7dfef58", 00:09:52.076 "is_configured": false, 00:09:52.076 "data_offset": 2048, 00:09:52.076 "data_size": 63488 00:09:52.076 } 00:09:52.076 ] 00:09:52.076 }' 00:09:52.076 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:52.076 21:09:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:52.334 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:52.334 21:09:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.592 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:09:52.592 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:09:52.850 [2024-07-14 21:09:04.342427] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:09:52.850 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:52.850 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:52.850 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:52.850 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:52.850 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:52.850 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:52.850 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:52.850 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:52.850 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:52.850 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:52.850 21:09:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:52.850 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:53.416 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:53.416 "name": "Existed_Raid", 00:09:53.416 "uuid": "49d8cc37-4225-11ef-aa83-81fbc7dfef58", 00:09:53.416 "strip_size_kb": 64, 00:09:53.416 "state": "configuring", 00:09:53.416 "raid_level": "raid0", 00:09:53.416 "superblock": true, 00:09:53.416 "num_base_bdevs": 3, 00:09:53.416 "num_base_bdevs_discovered": 2, 00:09:53.416 "num_base_bdevs_operational": 3, 00:09:53.416 "base_bdevs_list": [ 00:09:53.416 { 00:09:53.416 "name": "BaseBdev1", 00:09:53.416 "uuid": "4b0edf8f-4225-11ef-aa83-81fbc7dfef58", 00:09:53.416 "is_configured": true, 00:09:53.416 "data_offset": 2048, 00:09:53.416 "data_size": 63488 00:09:53.416 }, 00:09:53.416 { 00:09:53.416 "name": null, 00:09:53.416 "uuid": "48fb3be7-4225-11ef-aa83-81fbc7dfef58", 00:09:53.416 "is_configured": false, 00:09:53.416 "data_offset": 2048, 00:09:53.416 "data_size": 63488 00:09:53.416 }, 00:09:53.416 { 00:09:53.416 "name": "BaseBdev3", 00:09:53.416 "uuid": "4976874b-4225-11ef-aa83-81fbc7dfef58", 00:09:53.416 "is_configured": true, 00:09:53.416 "data_offset": 2048, 00:09:53.416 "data_size": 63488 00:09:53.416 } 00:09:53.416 ] 00:09:53.416 }' 00:09:53.416 21:09:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:53.416 21:09:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:53.673 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:53.673 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:09:53.931 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:09:53.931 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:09:54.189 [2024-07-14 21:09:05.598424] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:09:54.189 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:54.189 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:54.189 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:54.189 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:54.189 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:54.189 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:54.190 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:54.190 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:54.190 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:54.190 
21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:54.190 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.190 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:54.448 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:54.448 "name": "Existed_Raid", 00:09:54.448 "uuid": "49d8cc37-4225-11ef-aa83-81fbc7dfef58", 00:09:54.448 "strip_size_kb": 64, 00:09:54.448 "state": "configuring", 00:09:54.448 "raid_level": "raid0", 00:09:54.448 "superblock": true, 00:09:54.448 "num_base_bdevs": 3, 00:09:54.448 "num_base_bdevs_discovered": 1, 00:09:54.448 "num_base_bdevs_operational": 3, 00:09:54.448 "base_bdevs_list": [ 00:09:54.448 { 00:09:54.448 "name": null, 00:09:54.448 "uuid": "4b0edf8f-4225-11ef-aa83-81fbc7dfef58", 00:09:54.448 "is_configured": false, 00:09:54.448 "data_offset": 2048, 00:09:54.448 "data_size": 63488 00:09:54.448 }, 00:09:54.448 { 00:09:54.448 "name": null, 00:09:54.448 "uuid": "48fb3be7-4225-11ef-aa83-81fbc7dfef58", 00:09:54.448 "is_configured": false, 00:09:54.448 "data_offset": 2048, 00:09:54.448 "data_size": 63488 00:09:54.448 }, 00:09:54.448 { 00:09:54.448 "name": "BaseBdev3", 00:09:54.448 "uuid": "4976874b-4225-11ef-aa83-81fbc7dfef58", 00:09:54.448 "is_configured": true, 00:09:54.448 "data_offset": 2048, 00:09:54.448 "data_size": 63488 00:09:54.448 } 00:09:54.448 ] 00:09:54.448 }' 00:09:54.448 21:09:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:54.448 21:09:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:54.707 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:54.707 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:09:54.966 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:09:54.966 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:09:55.224 [2024-07-14 21:09:06.727078] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:09:55.224 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 3 00:09:55.224 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:55.224 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:09:55.224 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:55.224 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:55.224 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:55.224 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:55.224 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:09:55.225 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:55.225 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:55.225 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:55.225 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:55.483 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:55.483 "name": "Existed_Raid", 00:09:55.483 "uuid": "49d8cc37-4225-11ef-aa83-81fbc7dfef58", 00:09:55.483 "strip_size_kb": 64, 00:09:55.483 "state": "configuring", 00:09:55.483 "raid_level": "raid0", 00:09:55.483 "superblock": true, 00:09:55.483 "num_base_bdevs": 3, 00:09:55.484 "num_base_bdevs_discovered": 2, 00:09:55.484 "num_base_bdevs_operational": 3, 00:09:55.484 "base_bdevs_list": [ 00:09:55.484 { 00:09:55.484 "name": null, 00:09:55.484 "uuid": "4b0edf8f-4225-11ef-aa83-81fbc7dfef58", 00:09:55.484 "is_configured": false, 00:09:55.484 "data_offset": 2048, 00:09:55.484 "data_size": 63488 00:09:55.484 }, 00:09:55.484 { 00:09:55.484 "name": "BaseBdev2", 00:09:55.484 "uuid": "48fb3be7-4225-11ef-aa83-81fbc7dfef58", 00:09:55.484 "is_configured": true, 00:09:55.484 "data_offset": 2048, 00:09:55.484 "data_size": 63488 00:09:55.484 }, 00:09:55.484 { 00:09:55.484 "name": "BaseBdev3", 00:09:55.484 "uuid": "4976874b-4225-11ef-aa83-81fbc7dfef58", 00:09:55.484 "is_configured": true, 00:09:55.484 "data_offset": 2048, 00:09:55.484 "data_size": 63488 00:09:55.484 } 00:09:55.484 ] 00:09:55.484 }' 00:09:55.484 21:09:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:55.484 21:09:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:56.051 21:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.051 21:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:09:56.051 21:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:09:56.051 21:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:56.051 21:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:09:56.309 21:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 4b0edf8f-4225-11ef-aa83-81fbc7dfef58 00:09:56.567 [2024-07-14 21:09:07.971298] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:09:56.567 [2024-07-14 21:09:07.971350] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3283b6c34a00 00:09:56.567 [2024-07-14 21:09:07.971355] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:09:56.567 [2024-07-14 21:09:07.971371] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3283b6c97e20 00:09:56.567 [2024-07-14 21:09:07.971416] bdev_raid.c:1724:raid_bdev_configure_cont: 
*DEBUG*: raid bdev generic 0x3283b6c34a00 00:09:56.567 [2024-07-14 21:09:07.971420] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3283b6c34a00 00:09:56.567 [2024-07-14 21:09:07.971440] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:09:56.567 NewBaseBdev 00:09:56.567 21:09:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:09:56.567 21:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:09:56.567 21:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:56.567 21:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:09:56.567 21:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:56.567 21:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:56.567 21:09:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:09:56.825 21:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:09:57.084 [ 00:09:57.084 { 00:09:57.084 "name": "NewBaseBdev", 00:09:57.084 "aliases": [ 00:09:57.084 "4b0edf8f-4225-11ef-aa83-81fbc7dfef58" 00:09:57.084 ], 00:09:57.084 "product_name": "Malloc disk", 00:09:57.084 "block_size": 512, 00:09:57.084 "num_blocks": 65536, 00:09:57.084 "uuid": "4b0edf8f-4225-11ef-aa83-81fbc7dfef58", 00:09:57.084 "assigned_rate_limits": { 00:09:57.084 "rw_ios_per_sec": 0, 00:09:57.084 "rw_mbytes_per_sec": 0, 00:09:57.084 "r_mbytes_per_sec": 0, 00:09:57.084 "w_mbytes_per_sec": 0 00:09:57.084 }, 00:09:57.084 "claimed": true, 00:09:57.084 "claim_type": "exclusive_write", 00:09:57.084 "zoned": false, 00:09:57.084 "supported_io_types": { 00:09:57.084 "read": true, 00:09:57.084 "write": true, 00:09:57.084 "unmap": true, 00:09:57.084 "flush": true, 00:09:57.084 "reset": true, 00:09:57.084 "nvme_admin": false, 00:09:57.084 "nvme_io": false, 00:09:57.084 "nvme_io_md": false, 00:09:57.084 "write_zeroes": true, 00:09:57.084 "zcopy": true, 00:09:57.084 "get_zone_info": false, 00:09:57.084 "zone_management": false, 00:09:57.084 "zone_append": false, 00:09:57.084 "compare": false, 00:09:57.084 "compare_and_write": false, 00:09:57.084 "abort": true, 00:09:57.084 "seek_hole": false, 00:09:57.084 "seek_data": false, 00:09:57.084 "copy": true, 00:09:57.084 "nvme_iov_md": false 00:09:57.084 }, 00:09:57.084 "memory_domains": [ 00:09:57.084 { 00:09:57.084 "dma_device_id": "system", 00:09:57.084 "dma_device_type": 1 00:09:57.084 }, 00:09:57.084 { 00:09:57.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.084 "dma_device_type": 2 00:09:57.084 } 00:09:57.084 ], 00:09:57.084 "driver_specific": {} 00:09:57.084 } 00:09:57.084 ] 00:09:57.084 21:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:09:57.084 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 3 00:09:57.084 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:09:57.084 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 
-- # local expected_state=online 00:09:57.084 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:09:57.084 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:09:57.084 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:09:57.084 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:09:57.084 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:09:57.084 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:09:57.084 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:09:57.084 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:09:57.084 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:09:57.342 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:09:57.342 "name": "Existed_Raid", 00:09:57.342 "uuid": "49d8cc37-4225-11ef-aa83-81fbc7dfef58", 00:09:57.342 "strip_size_kb": 64, 00:09:57.342 "state": "online", 00:09:57.342 "raid_level": "raid0", 00:09:57.342 "superblock": true, 00:09:57.342 "num_base_bdevs": 3, 00:09:57.342 "num_base_bdevs_discovered": 3, 00:09:57.342 "num_base_bdevs_operational": 3, 00:09:57.342 "base_bdevs_list": [ 00:09:57.342 { 00:09:57.342 "name": "NewBaseBdev", 00:09:57.342 "uuid": "4b0edf8f-4225-11ef-aa83-81fbc7dfef58", 00:09:57.342 "is_configured": true, 00:09:57.342 "data_offset": 2048, 00:09:57.342 "data_size": 63488 00:09:57.342 }, 00:09:57.342 { 00:09:57.342 "name": "BaseBdev2", 00:09:57.342 "uuid": "48fb3be7-4225-11ef-aa83-81fbc7dfef58", 00:09:57.342 "is_configured": true, 00:09:57.342 "data_offset": 2048, 00:09:57.342 "data_size": 63488 00:09:57.342 }, 00:09:57.342 { 00:09:57.342 "name": "BaseBdev3", 00:09:57.343 "uuid": "4976874b-4225-11ef-aa83-81fbc7dfef58", 00:09:57.343 "is_configured": true, 00:09:57.343 "data_offset": 2048, 00:09:57.343 "data_size": 63488 00:09:57.343 } 00:09:57.343 ] 00:09:57.343 }' 00:09:57.343 21:09:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:09:57.343 21:09:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:57.601 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:09:57.601 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:09:57.601 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:09:57.601 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:09:57.601 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:09:57.601 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:09:57.601 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:09:57.601 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
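Each per-field assertion in this trace has the same two-step shape: extract one value from the RPC reply with jq, then compare it in bash (xtrace prints the right-hand side of [[ ... ]] with backslash escapes, which is why the log shows \t\r\u\e-style patterns for quoted literals). A minimal sketch of one such check; the variable name is illustrative:

    val=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
          jq '.[0].base_bdevs_list[2].is_configured')
    [[ "$val" == "true" ]]   # a mismatch fails the test case
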
00:09:57.859 [2024-07-14 21:09:09.283353] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:09:57.859 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:09:57.859 "name": "Existed_Raid", 00:09:57.859 "aliases": [ 00:09:57.859 "49d8cc37-4225-11ef-aa83-81fbc7dfef58" 00:09:57.859 ], 00:09:57.859 "product_name": "Raid Volume", 00:09:57.859 "block_size": 512, 00:09:57.859 "num_blocks": 190464, 00:09:57.859 "uuid": "49d8cc37-4225-11ef-aa83-81fbc7dfef58", 00:09:57.859 "assigned_rate_limits": { 00:09:57.859 "rw_ios_per_sec": 0, 00:09:57.859 "rw_mbytes_per_sec": 0, 00:09:57.859 "r_mbytes_per_sec": 0, 00:09:57.859 "w_mbytes_per_sec": 0 00:09:57.859 }, 00:09:57.859 "claimed": false, 00:09:57.859 "zoned": false, 00:09:57.859 "supported_io_types": { 00:09:57.859 "read": true, 00:09:57.859 "write": true, 00:09:57.859 "unmap": true, 00:09:57.860 "flush": true, 00:09:57.860 "reset": true, 00:09:57.860 "nvme_admin": false, 00:09:57.860 "nvme_io": false, 00:09:57.860 "nvme_io_md": false, 00:09:57.860 "write_zeroes": true, 00:09:57.860 "zcopy": false, 00:09:57.860 "get_zone_info": false, 00:09:57.860 "zone_management": false, 00:09:57.860 "zone_append": false, 00:09:57.860 "compare": false, 00:09:57.860 "compare_and_write": false, 00:09:57.860 "abort": false, 00:09:57.860 "seek_hole": false, 00:09:57.860 "seek_data": false, 00:09:57.860 "copy": false, 00:09:57.860 "nvme_iov_md": false 00:09:57.860 }, 00:09:57.860 "memory_domains": [ 00:09:57.860 { 00:09:57.860 "dma_device_id": "system", 00:09:57.860 "dma_device_type": 1 00:09:57.860 }, 00:09:57.860 { 00:09:57.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.860 "dma_device_type": 2 00:09:57.860 }, 00:09:57.860 { 00:09:57.860 "dma_device_id": "system", 00:09:57.860 "dma_device_type": 1 00:09:57.860 }, 00:09:57.860 { 00:09:57.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.860 "dma_device_type": 2 00:09:57.860 }, 00:09:57.860 { 00:09:57.860 "dma_device_id": "system", 00:09:57.860 "dma_device_type": 1 00:09:57.860 }, 00:09:57.860 { 00:09:57.860 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:57.860 "dma_device_type": 2 00:09:57.860 } 00:09:57.860 ], 00:09:57.860 "driver_specific": { 00:09:57.860 "raid": { 00:09:57.860 "uuid": "49d8cc37-4225-11ef-aa83-81fbc7dfef58", 00:09:57.860 "strip_size_kb": 64, 00:09:57.860 "state": "online", 00:09:57.860 "raid_level": "raid0", 00:09:57.860 "superblock": true, 00:09:57.860 "num_base_bdevs": 3, 00:09:57.860 "num_base_bdevs_discovered": 3, 00:09:57.860 "num_base_bdevs_operational": 3, 00:09:57.860 "base_bdevs_list": [ 00:09:57.860 { 00:09:57.860 "name": "NewBaseBdev", 00:09:57.860 "uuid": "4b0edf8f-4225-11ef-aa83-81fbc7dfef58", 00:09:57.860 "is_configured": true, 00:09:57.860 "data_offset": 2048, 00:09:57.860 "data_size": 63488 00:09:57.860 }, 00:09:57.860 { 00:09:57.860 "name": "BaseBdev2", 00:09:57.860 "uuid": "48fb3be7-4225-11ef-aa83-81fbc7dfef58", 00:09:57.860 "is_configured": true, 00:09:57.860 "data_offset": 2048, 00:09:57.860 "data_size": 63488 00:09:57.860 }, 00:09:57.860 { 00:09:57.860 "name": "BaseBdev3", 00:09:57.860 "uuid": "4976874b-4225-11ef-aa83-81fbc7dfef58", 00:09:57.860 "is_configured": true, 00:09:57.860 "data_offset": 2048, 00:09:57.860 "data_size": 63488 00:09:57.860 } 00:09:57.860 ] 00:09:57.860 } 00:09:57.860 } 00:09:57.860 }' 00:09:57.860 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:09:57.860 
21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:09:57.860 BaseBdev2 00:09:57.860 BaseBdev3' 00:09:57.860 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:57.860 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:09:57.860 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:58.118 "name": "NewBaseBdev", 00:09:58.118 "aliases": [ 00:09:58.118 "4b0edf8f-4225-11ef-aa83-81fbc7dfef58" 00:09:58.118 ], 00:09:58.118 "product_name": "Malloc disk", 00:09:58.118 "block_size": 512, 00:09:58.118 "num_blocks": 65536, 00:09:58.118 "uuid": "4b0edf8f-4225-11ef-aa83-81fbc7dfef58", 00:09:58.118 "assigned_rate_limits": { 00:09:58.118 "rw_ios_per_sec": 0, 00:09:58.118 "rw_mbytes_per_sec": 0, 00:09:58.118 "r_mbytes_per_sec": 0, 00:09:58.118 "w_mbytes_per_sec": 0 00:09:58.118 }, 00:09:58.118 "claimed": true, 00:09:58.118 "claim_type": "exclusive_write", 00:09:58.118 "zoned": false, 00:09:58.118 "supported_io_types": { 00:09:58.118 "read": true, 00:09:58.118 "write": true, 00:09:58.118 "unmap": true, 00:09:58.118 "flush": true, 00:09:58.118 "reset": true, 00:09:58.118 "nvme_admin": false, 00:09:58.118 "nvme_io": false, 00:09:58.118 "nvme_io_md": false, 00:09:58.118 "write_zeroes": true, 00:09:58.118 "zcopy": true, 00:09:58.118 "get_zone_info": false, 00:09:58.118 "zone_management": false, 00:09:58.118 "zone_append": false, 00:09:58.118 "compare": false, 00:09:58.118 "compare_and_write": false, 00:09:58.118 "abort": true, 00:09:58.118 "seek_hole": false, 00:09:58.118 "seek_data": false, 00:09:58.118 "copy": true, 00:09:58.118 "nvme_iov_md": false 00:09:58.118 }, 00:09:58.118 "memory_domains": [ 00:09:58.118 { 00:09:58.118 "dma_device_id": "system", 00:09:58.118 "dma_device_type": 1 00:09:58.118 }, 00:09:58.118 { 00:09:58.118 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.118 "dma_device_type": 2 00:09:58.118 } 00:09:58.118 ], 00:09:58.118 "driver_specific": {} 00:09:58.118 }' 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == 
null ]] 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:09:58.118 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:58.393 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:58.393 "name": "BaseBdev2", 00:09:58.393 "aliases": [ 00:09:58.393 "48fb3be7-4225-11ef-aa83-81fbc7dfef58" 00:09:58.393 ], 00:09:58.393 "product_name": "Malloc disk", 00:09:58.393 "block_size": 512, 00:09:58.393 "num_blocks": 65536, 00:09:58.394 "uuid": "48fb3be7-4225-11ef-aa83-81fbc7dfef58", 00:09:58.394 "assigned_rate_limits": { 00:09:58.394 "rw_ios_per_sec": 0, 00:09:58.394 "rw_mbytes_per_sec": 0, 00:09:58.394 "r_mbytes_per_sec": 0, 00:09:58.394 "w_mbytes_per_sec": 0 00:09:58.394 }, 00:09:58.394 "claimed": true, 00:09:58.394 "claim_type": "exclusive_write", 00:09:58.394 "zoned": false, 00:09:58.394 "supported_io_types": { 00:09:58.394 "read": true, 00:09:58.394 "write": true, 00:09:58.394 "unmap": true, 00:09:58.394 "flush": true, 00:09:58.394 "reset": true, 00:09:58.394 "nvme_admin": false, 00:09:58.394 "nvme_io": false, 00:09:58.394 "nvme_io_md": false, 00:09:58.394 "write_zeroes": true, 00:09:58.394 "zcopy": true, 00:09:58.394 "get_zone_info": false, 00:09:58.394 "zone_management": false, 00:09:58.394 "zone_append": false, 00:09:58.394 "compare": false, 00:09:58.394 "compare_and_write": false, 00:09:58.394 "abort": true, 00:09:58.394 "seek_hole": false, 00:09:58.394 "seek_data": false, 00:09:58.394 "copy": true, 00:09:58.394 "nvme_iov_md": false 00:09:58.394 }, 00:09:58.394 "memory_domains": [ 00:09:58.394 { 00:09:58.394 "dma_device_id": "system", 00:09:58.394 "dma_device_type": 1 00:09:58.394 }, 00:09:58.394 { 00:09:58.394 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.394 "dma_device_type": 2 00:09:58.394 } 00:09:58.394 ], 00:09:58.394 "driver_specific": {} 00:09:58.394 }' 00:09:58.394 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:58.394 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:58.394 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:58.394 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:58.394 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:58.394 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:58.394 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:58.394 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:58.394 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:58.394 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:58.673 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:58.673 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:58.673 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:09:58.674 21:09:09 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:09:58.674 21:09:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:09:58.674 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:09:58.674 "name": "BaseBdev3", 00:09:58.674 "aliases": [ 00:09:58.674 "4976874b-4225-11ef-aa83-81fbc7dfef58" 00:09:58.674 ], 00:09:58.674 "product_name": "Malloc disk", 00:09:58.674 "block_size": 512, 00:09:58.674 "num_blocks": 65536, 00:09:58.674 "uuid": "4976874b-4225-11ef-aa83-81fbc7dfef58", 00:09:58.674 "assigned_rate_limits": { 00:09:58.674 "rw_ios_per_sec": 0, 00:09:58.674 "rw_mbytes_per_sec": 0, 00:09:58.674 "r_mbytes_per_sec": 0, 00:09:58.674 "w_mbytes_per_sec": 0 00:09:58.674 }, 00:09:58.674 "claimed": true, 00:09:58.674 "claim_type": "exclusive_write", 00:09:58.674 "zoned": false, 00:09:58.674 "supported_io_types": { 00:09:58.674 "read": true, 00:09:58.674 "write": true, 00:09:58.674 "unmap": true, 00:09:58.674 "flush": true, 00:09:58.674 "reset": true, 00:09:58.674 "nvme_admin": false, 00:09:58.674 "nvme_io": false, 00:09:58.674 "nvme_io_md": false, 00:09:58.674 "write_zeroes": true, 00:09:58.674 "zcopy": true, 00:09:58.674 "get_zone_info": false, 00:09:58.674 "zone_management": false, 00:09:58.674 "zone_append": false, 00:09:58.674 "compare": false, 00:09:58.674 "compare_and_write": false, 00:09:58.674 "abort": true, 00:09:58.674 "seek_hole": false, 00:09:58.674 "seek_data": false, 00:09:58.674 "copy": true, 00:09:58.674 "nvme_iov_md": false 00:09:58.674 }, 00:09:58.674 "memory_domains": [ 00:09:58.674 { 00:09:58.674 "dma_device_id": "system", 00:09:58.674 "dma_device_type": 1 00:09:58.674 }, 00:09:58.674 { 00:09:58.674 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:58.674 "dma_device_type": 2 00:09:58.674 } 00:09:58.674 ], 00:09:58.674 "driver_specific": {} 00:09:58.674 }' 00:09:58.674 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:58.674 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:09:58.931 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:09:58.931 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:58.931 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:09:58.931 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:09:58.931 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:58.931 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:09:58.931 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:09:58.931 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:58.931 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:09:58.931 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:09:58.931 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:09:59.188 [2024-07-14 21:09:10.531321] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid 
bdev: Existed_Raid 00:09:59.188 [2024-07-14 21:09:10.531352] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:09:59.188 [2024-07-14 21:09:10.531379] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:09:59.188 [2024-07-14 21:09:10.531394] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:09:59.188 [2024-07-14 21:09:10.531399] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3283b6c34a00 name Existed_Raid, state offline 00:09:59.188 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 52648 00:09:59.188 21:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 52648 ']' 00:09:59.188 21:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 52648 00:09:59.188 21:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:09:59.188 21:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:09:59.188 21:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:09:59.188 21:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 52648 00:09:59.188 21:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:09:59.188 killing process with pid 52648 00:09:59.188 21:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:09:59.188 21:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 52648' 00:09:59.188 21:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 52648 00:09:59.188 [2024-07-14 21:09:10.556275] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:09:59.188 21:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 52648 00:09:59.188 [2024-07-14 21:09:10.581549] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:09:59.445 21:09:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:09:59.445 00:09:59.445 real 0m23.682s 00:09:59.445 user 0m43.079s 00:09:59.445 sys 0m3.397s 00:09:59.445 21:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:59.445 21:09:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:09:59.445 ************************************ 00:09:59.445 END TEST raid_state_function_test_sb 00:09:59.445 ************************************ 00:09:59.445 21:09:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:09:59.445 21:09:10 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 3 00:09:59.445 21:09:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:09:59.445 21:09:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:59.445 21:09:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:09:59.445 ************************************ 00:09:59.445 START TEST raid_superblock_test 00:09:59.445 ************************************ 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 3 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:09:59.445 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:09:59.446 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:09:59.446 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:09:59.446 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=53376 00:09:59.446 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 53376 /var/tmp/spdk-raid.sock 00:09:59.446 21:09:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:09:59.446 21:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 53376 ']' 00:09:59.446 21:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:09:59.446 21:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:59.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:09:59.446 21:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:09:59.446 21:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:59.446 21:09:10 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:09:59.446 [2024-07-14 21:09:10.877560] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:09:59.446 [2024-07-14 21:09:10.877724] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:00.012 EAL: TSC is not safe to use in SMP mode 00:10:00.012 EAL: TSC is not invariant 00:10:00.012 [2024-07-14 21:09:11.404942] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:00.012 [2024-07-14 21:09:11.510851] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
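For reference, the prologue above is the stock bdev_svc pattern: start the stub application on a private RPC socket, wait for it to come up, then build each base device (traced below). Condensed from this run's own commands; backgrounding and the waitforlisten polling loop are elided:

    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    # then, per base bdev (1..3): a malloc bdev wrapped in a fixed-UUID passthru
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001
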
00:10:00.012 [2024-07-14 21:09:11.513414] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:00.012 [2024-07-14 21:09:11.514369] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.012 [2024-07-14 21:09:11.514387] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:00.578 21:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:00.578 21:09:11 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:10:00.578 21:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:10:00.578 21:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:00.578 21:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:10:00.578 21:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:10:00.578 21:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:10:00.578 21:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:00.578 21:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:00.578 21:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:00.578 21:09:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:10:00.578 malloc1 00:10:00.836 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:00.837 [2024-07-14 21:09:12.371918] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:00.837 [2024-07-14 21:09:12.371974] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:00.837 [2024-07-14 21:09:12.372000] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x139e8e834780 00:10:00.837 [2024-07-14 21:09:12.372008] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:00.837 [2024-07-14 21:09:12.372934] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:00.837 [2024-07-14 21:09:12.372974] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:00.837 pt1 00:10:01.096 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:01.096 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:01.096 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:10:01.096 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:10:01.096 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:10:01.096 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:01.096 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:01.096 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:01.096 21:09:12 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:10:01.096 malloc2 00:10:01.096 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:01.355 [2024-07-14 21:09:12.811967] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:01.355 [2024-07-14 21:09:12.812022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.355 [2024-07-14 21:09:12.812049] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x139e8e834c80 00:10:01.355 [2024-07-14 21:09:12.812055] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.355 [2024-07-14 21:09:12.812722] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.355 [2024-07-14 21:09:12.812753] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:01.355 pt2 00:10:01.355 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:01.355 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:01.355 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:10:01.355 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:10:01.355 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:10:01.355 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:10:01.355 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:10:01.355 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:10:01.355 21:09:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:10:01.614 malloc3 00:10:01.614 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:01.885 [2024-07-14 21:09:13.279961] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:01.885 [2024-07-14 21:09:13.280012] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:01.885 [2024-07-14 21:09:13.280040] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x139e8e835180 00:10:01.885 [2024-07-14 21:09:13.280047] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:01.885 [2024-07-14 21:09:13.280650] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:01.885 [2024-07-14 21:09:13.280672] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:01.885 pt3 00:10:01.885 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:10:01.885 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:10:01.885 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:10:02.151 [2024-07-14 21:09:13.551977] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:02.151 [2024-07-14 21:09:13.552524] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:02.151 [2024-07-14 21:09:13.552545] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:02.151 [2024-07-14 21:09:13.552592] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x139e8e835400 00:10:02.151 [2024-07-14 21:09:13.552598] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:02.151 [2024-07-14 21:09:13.552628] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x139e8e897e20 00:10:02.151 [2024-07-14 21:09:13.552698] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x139e8e835400 00:10:02.151 [2024-07-14 21:09:13.552703] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x139e8e835400 00:10:02.151 [2024-07-14 21:09:13.552728] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:02.151 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:02.151 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:02.151 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:02.151 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:02.151 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:02.151 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:02.151 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:02.151 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:02.151 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:02.151 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:02.151 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:02.151 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:02.408 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:02.408 "name": "raid_bdev1", 00:10:02.408 "uuid": "52679bdf-4225-11ef-aa83-81fbc7dfef58", 00:10:02.408 "strip_size_kb": 64, 00:10:02.409 "state": "online", 00:10:02.409 "raid_level": "raid0", 00:10:02.409 "superblock": true, 00:10:02.409 "num_base_bdevs": 3, 00:10:02.409 "num_base_bdevs_discovered": 3, 00:10:02.409 "num_base_bdevs_operational": 3, 00:10:02.409 "base_bdevs_list": [ 00:10:02.409 { 00:10:02.409 "name": "pt1", 00:10:02.409 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.409 "is_configured": true, 00:10:02.409 "data_offset": 2048, 00:10:02.409 "data_size": 63488 00:10:02.409 }, 00:10:02.409 { 00:10:02.409 "name": "pt2", 00:10:02.409 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.409 "is_configured": true, 00:10:02.409 
"data_offset": 2048, 00:10:02.409 "data_size": 63488 00:10:02.409 }, 00:10:02.409 { 00:10:02.409 "name": "pt3", 00:10:02.409 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.409 "is_configured": true, 00:10:02.409 "data_offset": 2048, 00:10:02.409 "data_size": 63488 00:10:02.409 } 00:10:02.409 ] 00:10:02.409 }' 00:10:02.409 21:09:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:02.409 21:09:13 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:02.666 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:10:02.666 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:02.666 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:02.666 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:02.666 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:02.666 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:02.666 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:02.666 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:02.924 [2024-07-14 21:09:14.396056] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:02.924 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:02.924 "name": "raid_bdev1", 00:10:02.924 "aliases": [ 00:10:02.924 "52679bdf-4225-11ef-aa83-81fbc7dfef58" 00:10:02.924 ], 00:10:02.924 "product_name": "Raid Volume", 00:10:02.924 "block_size": 512, 00:10:02.924 "num_blocks": 190464, 00:10:02.924 "uuid": "52679bdf-4225-11ef-aa83-81fbc7dfef58", 00:10:02.924 "assigned_rate_limits": { 00:10:02.924 "rw_ios_per_sec": 0, 00:10:02.924 "rw_mbytes_per_sec": 0, 00:10:02.924 "r_mbytes_per_sec": 0, 00:10:02.924 "w_mbytes_per_sec": 0 00:10:02.924 }, 00:10:02.924 "claimed": false, 00:10:02.924 "zoned": false, 00:10:02.924 "supported_io_types": { 00:10:02.924 "read": true, 00:10:02.924 "write": true, 00:10:02.924 "unmap": true, 00:10:02.924 "flush": true, 00:10:02.924 "reset": true, 00:10:02.924 "nvme_admin": false, 00:10:02.924 "nvme_io": false, 00:10:02.924 "nvme_io_md": false, 00:10:02.924 "write_zeroes": true, 00:10:02.924 "zcopy": false, 00:10:02.924 "get_zone_info": false, 00:10:02.924 "zone_management": false, 00:10:02.924 "zone_append": false, 00:10:02.924 "compare": false, 00:10:02.924 "compare_and_write": false, 00:10:02.924 "abort": false, 00:10:02.924 "seek_hole": false, 00:10:02.924 "seek_data": false, 00:10:02.924 "copy": false, 00:10:02.924 "nvme_iov_md": false 00:10:02.924 }, 00:10:02.924 "memory_domains": [ 00:10:02.924 { 00:10:02.924 "dma_device_id": "system", 00:10:02.924 "dma_device_type": 1 00:10:02.924 }, 00:10:02.924 { 00:10:02.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.924 "dma_device_type": 2 00:10:02.924 }, 00:10:02.924 { 00:10:02.924 "dma_device_id": "system", 00:10:02.924 "dma_device_type": 1 00:10:02.924 }, 00:10:02.924 { 00:10:02.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:02.924 "dma_device_type": 2 00:10:02.924 }, 00:10:02.924 { 00:10:02.924 "dma_device_id": "system", 00:10:02.924 "dma_device_type": 1 00:10:02.924 }, 00:10:02.924 { 00:10:02.924 "dma_device_id": 
"SPDK_ACCEL_DMA_DEVICE", 00:10:02.924 "dma_device_type": 2 00:10:02.924 } 00:10:02.924 ], 00:10:02.924 "driver_specific": { 00:10:02.924 "raid": { 00:10:02.924 "uuid": "52679bdf-4225-11ef-aa83-81fbc7dfef58", 00:10:02.924 "strip_size_kb": 64, 00:10:02.924 "state": "online", 00:10:02.924 "raid_level": "raid0", 00:10:02.924 "superblock": true, 00:10:02.924 "num_base_bdevs": 3, 00:10:02.924 "num_base_bdevs_discovered": 3, 00:10:02.924 "num_base_bdevs_operational": 3, 00:10:02.924 "base_bdevs_list": [ 00:10:02.924 { 00:10:02.924 "name": "pt1", 00:10:02.924 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:02.924 "is_configured": true, 00:10:02.924 "data_offset": 2048, 00:10:02.924 "data_size": 63488 00:10:02.924 }, 00:10:02.924 { 00:10:02.924 "name": "pt2", 00:10:02.924 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:02.924 "is_configured": true, 00:10:02.924 "data_offset": 2048, 00:10:02.924 "data_size": 63488 00:10:02.924 }, 00:10:02.924 { 00:10:02.924 "name": "pt3", 00:10:02.924 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:02.924 "is_configured": true, 00:10:02.924 "data_offset": 2048, 00:10:02.924 "data_size": 63488 00:10:02.924 } 00:10:02.924 ] 00:10:02.924 } 00:10:02.924 } 00:10:02.924 }' 00:10:02.924 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:02.924 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:02.924 pt2 00:10:02.924 pt3' 00:10:02.924 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:02.924 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:02.924 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:03.184 "name": "pt1", 00:10:03.184 "aliases": [ 00:10:03.184 "00000000-0000-0000-0000-000000000001" 00:10:03.184 ], 00:10:03.184 "product_name": "passthru", 00:10:03.184 "block_size": 512, 00:10:03.184 "num_blocks": 65536, 00:10:03.184 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:03.184 "assigned_rate_limits": { 00:10:03.184 "rw_ios_per_sec": 0, 00:10:03.184 "rw_mbytes_per_sec": 0, 00:10:03.184 "r_mbytes_per_sec": 0, 00:10:03.184 "w_mbytes_per_sec": 0 00:10:03.184 }, 00:10:03.184 "claimed": true, 00:10:03.184 "claim_type": "exclusive_write", 00:10:03.184 "zoned": false, 00:10:03.184 "supported_io_types": { 00:10:03.184 "read": true, 00:10:03.184 "write": true, 00:10:03.184 "unmap": true, 00:10:03.184 "flush": true, 00:10:03.184 "reset": true, 00:10:03.184 "nvme_admin": false, 00:10:03.184 "nvme_io": false, 00:10:03.184 "nvme_io_md": false, 00:10:03.184 "write_zeroes": true, 00:10:03.184 "zcopy": true, 00:10:03.184 "get_zone_info": false, 00:10:03.184 "zone_management": false, 00:10:03.184 "zone_append": false, 00:10:03.184 "compare": false, 00:10:03.184 "compare_and_write": false, 00:10:03.184 "abort": true, 00:10:03.184 "seek_hole": false, 00:10:03.184 "seek_data": false, 00:10:03.184 "copy": true, 00:10:03.184 "nvme_iov_md": false 00:10:03.184 }, 00:10:03.184 "memory_domains": [ 00:10:03.184 { 00:10:03.184 "dma_device_id": "system", 00:10:03.184 "dma_device_type": 1 00:10:03.184 }, 00:10:03.184 { 00:10:03.184 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.184 "dma_device_type": 2 
00:10:03.184 } 00:10:03.184 ], 00:10:03.184 "driver_specific": { 00:10:03.184 "passthru": { 00:10:03.184 "name": "pt1", 00:10:03.184 "base_bdev_name": "malloc1" 00:10:03.184 } 00:10:03.184 } 00:10:03.184 }' 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:03.184 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:03.443 "name": "pt2", 00:10:03.443 "aliases": [ 00:10:03.443 "00000000-0000-0000-0000-000000000002" 00:10:03.443 ], 00:10:03.443 "product_name": "passthru", 00:10:03.443 "block_size": 512, 00:10:03.443 "num_blocks": 65536, 00:10:03.443 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:03.443 "assigned_rate_limits": { 00:10:03.443 "rw_ios_per_sec": 0, 00:10:03.443 "rw_mbytes_per_sec": 0, 00:10:03.443 "r_mbytes_per_sec": 0, 00:10:03.443 "w_mbytes_per_sec": 0 00:10:03.443 }, 00:10:03.443 "claimed": true, 00:10:03.443 "claim_type": "exclusive_write", 00:10:03.443 "zoned": false, 00:10:03.443 "supported_io_types": { 00:10:03.443 "read": true, 00:10:03.443 "write": true, 00:10:03.443 "unmap": true, 00:10:03.443 "flush": true, 00:10:03.443 "reset": true, 00:10:03.443 "nvme_admin": false, 00:10:03.443 "nvme_io": false, 00:10:03.443 "nvme_io_md": false, 00:10:03.443 "write_zeroes": true, 00:10:03.443 "zcopy": true, 00:10:03.443 "get_zone_info": false, 00:10:03.443 "zone_management": false, 00:10:03.443 "zone_append": false, 00:10:03.443 "compare": false, 00:10:03.443 "compare_and_write": false, 00:10:03.443 "abort": true, 00:10:03.443 "seek_hole": false, 00:10:03.443 "seek_data": false, 00:10:03.443 "copy": true, 00:10:03.443 "nvme_iov_md": false 00:10:03.443 }, 00:10:03.443 "memory_domains": [ 00:10:03.443 { 00:10:03.443 "dma_device_id": "system", 00:10:03.443 "dma_device_type": 1 00:10:03.443 }, 00:10:03.443 { 00:10:03.443 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:03.443 "dma_device_type": 2 00:10:03.443 } 00:10:03.443 ], 00:10:03.443 "driver_specific": { 00:10:03.443 "passthru": { 00:10:03.443 "name": "pt2", 00:10:03.443 "base_bdev_name": 
"malloc2" 00:10:03.443 } 00:10:03.443 } 00:10:03.443 }' 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:03.443 21:09:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:04.011 "name": "pt3", 00:10:04.011 "aliases": [ 00:10:04.011 "00000000-0000-0000-0000-000000000003" 00:10:04.011 ], 00:10:04.011 "product_name": "passthru", 00:10:04.011 "block_size": 512, 00:10:04.011 "num_blocks": 65536, 00:10:04.011 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:04.011 "assigned_rate_limits": { 00:10:04.011 "rw_ios_per_sec": 0, 00:10:04.011 "rw_mbytes_per_sec": 0, 00:10:04.011 "r_mbytes_per_sec": 0, 00:10:04.011 "w_mbytes_per_sec": 0 00:10:04.011 }, 00:10:04.011 "claimed": true, 00:10:04.011 "claim_type": "exclusive_write", 00:10:04.011 "zoned": false, 00:10:04.011 "supported_io_types": { 00:10:04.011 "read": true, 00:10:04.011 "write": true, 00:10:04.011 "unmap": true, 00:10:04.011 "flush": true, 00:10:04.011 "reset": true, 00:10:04.011 "nvme_admin": false, 00:10:04.011 "nvme_io": false, 00:10:04.011 "nvme_io_md": false, 00:10:04.011 "write_zeroes": true, 00:10:04.011 "zcopy": true, 00:10:04.011 "get_zone_info": false, 00:10:04.011 "zone_management": false, 00:10:04.011 "zone_append": false, 00:10:04.011 "compare": false, 00:10:04.011 "compare_and_write": false, 00:10:04.011 "abort": true, 00:10:04.011 "seek_hole": false, 00:10:04.011 "seek_data": false, 00:10:04.011 "copy": true, 00:10:04.011 "nvme_iov_md": false 00:10:04.011 }, 00:10:04.011 "memory_domains": [ 00:10:04.011 { 00:10:04.011 "dma_device_id": "system", 00:10:04.011 "dma_device_type": 1 00:10:04.011 }, 00:10:04.011 { 00:10:04.011 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:04.011 "dma_device_type": 2 00:10:04.011 } 00:10:04.011 ], 00:10:04.011 "driver_specific": { 00:10:04.011 "passthru": { 00:10:04.011 "name": "pt3", 00:10:04.011 "base_bdev_name": "malloc3" 00:10:04.011 } 00:10:04.011 } 00:10:04.011 }' 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 
00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:10:04.011 [2024-07-14 21:09:15.508132] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=52679bdf-4225-11ef-aa83-81fbc7dfef58 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 52679bdf-4225-11ef-aa83-81fbc7dfef58 ']' 00:10:04.011 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:04.269 [2024-07-14 21:09:15.804135] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:04.269 [2024-07-14 21:09:15.804153] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:04.269 [2024-07-14 21:09:15.804176] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:04.269 [2024-07-14 21:09:15.804190] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:04.269 [2024-07-14 21:09:15.804195] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x139e8e835400 name raid_bdev1, state offline 00:10:04.527 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:04.527 21:09:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:10:04.527 21:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:10:04.527 21:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:10:04.527 21:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:04.527 21:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:10:04.786 21:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:04.786 21:09:16 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:05.045 21:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:10:05.045 21:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:10:05.303 21:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:10:05.303 21:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:10:05.562 21:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:10:05.562 21:09:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:05.562 21:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:10:05.562 21:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:05.562 21:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.562 21:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.562 21:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.562 21:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.562 21:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.562 21:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:10:05.562 21:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:10:05.562 21:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:10:05.562 21:09:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:10:05.820 [2024-07-14 21:09:17.192164] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:10:05.820 [2024-07-14 21:09:17.192835] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:10:05.820 [2024-07-14 21:09:17.192854] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:10:05.820 [2024-07-14 21:09:17.192869] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:10:05.820 [2024-07-14 21:09:17.192909] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:10:05.820 [2024-07-14 21:09:17.192921] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock 
of a different raid bdev found on bdev malloc3 00:10:05.820 [2024-07-14 21:09:17.192930] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:05.820 [2024-07-14 21:09:17.192934] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x139e8e835180 name raid_bdev1, state configuring 00:10:05.820 request: 00:10:05.820 { 00:10:05.820 "name": "raid_bdev1", 00:10:05.820 "raid_level": "raid0", 00:10:05.820 "base_bdevs": [ 00:10:05.820 "malloc1", 00:10:05.820 "malloc2", 00:10:05.820 "malloc3" 00:10:05.820 ], 00:10:05.821 "strip_size_kb": 64, 00:10:05.821 "superblock": false, 00:10:05.821 "method": "bdev_raid_create", 00:10:05.821 "req_id": 1 00:10:05.821 } 00:10:05.821 Got JSON-RPC error response 00:10:05.821 response: 00:10:05.821 { 00:10:05.821 "code": -17, 00:10:05.821 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:10:05.821 } 00:10:05.821 21:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:10:05.821 21:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:10:05.821 21:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:10:05.821 21:09:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:10:05.821 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:05.821 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:10:06.079 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:10:06.079 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:10:06.079 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:10:06.338 [2024-07-14 21:09:17.736195] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:10:06.338 [2024-07-14 21:09:17.736251] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:06.338 [2024-07-14 21:09:17.736279] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x139e8e834c80 00:10:06.338 [2024-07-14 21:09:17.736287] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:06.338 [2024-07-14 21:09:17.737023] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:06.338 [2024-07-14 21:09:17.737048] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:10:06.338 [2024-07-14 21:09:17.737072] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:10:06.338 [2024-07-14 21:09:17.737095] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:10:06.338 pt1 00:10:06.338 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:06.338 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:06.338 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:06.338 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:06.338 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:10:06.338 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:06.338 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:06.338 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:06.338 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:06.338 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:06.338 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:06.338 21:09:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:06.597 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:06.597 "name": "raid_bdev1", 00:10:06.597 "uuid": "52679bdf-4225-11ef-aa83-81fbc7dfef58", 00:10:06.597 "strip_size_kb": 64, 00:10:06.597 "state": "configuring", 00:10:06.597 "raid_level": "raid0", 00:10:06.597 "superblock": true, 00:10:06.597 "num_base_bdevs": 3, 00:10:06.597 "num_base_bdevs_discovered": 1, 00:10:06.597 "num_base_bdevs_operational": 3, 00:10:06.597 "base_bdevs_list": [ 00:10:06.597 { 00:10:06.597 "name": "pt1", 00:10:06.597 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:06.597 "is_configured": true, 00:10:06.597 "data_offset": 2048, 00:10:06.597 "data_size": 63488 00:10:06.597 }, 00:10:06.597 { 00:10:06.597 "name": null, 00:10:06.597 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:06.597 "is_configured": false, 00:10:06.597 "data_offset": 2048, 00:10:06.597 "data_size": 63488 00:10:06.597 }, 00:10:06.597 { 00:10:06.597 "name": null, 00:10:06.597 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:06.597 "is_configured": false, 00:10:06.597 "data_offset": 2048, 00:10:06.597 "data_size": 63488 00:10:06.597 } 00:10:06.597 ] 00:10:06.597 }' 00:10:06.597 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:06.597 21:09:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:06.856 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:10:06.856 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:07.114 [2024-07-14 21:09:18.572212] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:07.114 [2024-07-14 21:09:18.572287] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:07.115 [2024-07-14 21:09:18.572297] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x139e8e835680 00:10:07.115 [2024-07-14 21:09:18.572304] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:07.115 [2024-07-14 21:09:18.572418] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:07.115 [2024-07-14 21:09:18.572428] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:07.115 [2024-07-14 21:09:18.572467] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:07.115 [2024-07-14 21:09:18.572475] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:07.115 
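
The teardown and re-creation sequence above is the core of the superblock test. Deleting raid_bdev1 and its passthru bdevs leaves the superblock written on malloc1..malloc3 intact, so a fresh bdev_raid_create on the raw malloc bdevs is expected to fail with -17 (File exists): raid_bdev_configure_base_bdev_check_sb_cb reports a superblock of a different raid bdev on each member. The NOT helper from autotest_common.sh inverts the exit status so the expected failure passes the test; re-registering a passthru bdev on a superblocked member then lets raid_bdev_examine_cont claim it back into raid_bdev1, leaving the array in the configuring state until all members return. A hedged sketch of the negative check; NOT is simplified here (the real helper also inspects the exit-code range):

  NOT() { if "$@"; then return 1; else return 0; fi; }   # simplified stand-in for the autotest helper
  NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 \
      && echo 'create failed as expected: File exists'
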
pt2 00:10:07.115 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:10:07.373 [2024-07-14 21:09:18.836249] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:10:07.373 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 3 00:10:07.373 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:07.373 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:07.373 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:07.373 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:07.373 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:07.373 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:07.373 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:07.373 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:07.373 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:07.373 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:07.373 21:09:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:07.631 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:07.631 "name": "raid_bdev1", 00:10:07.631 "uuid": "52679bdf-4225-11ef-aa83-81fbc7dfef58", 00:10:07.631 "strip_size_kb": 64, 00:10:07.631 "state": "configuring", 00:10:07.631 "raid_level": "raid0", 00:10:07.631 "superblock": true, 00:10:07.631 "num_base_bdevs": 3, 00:10:07.631 "num_base_bdevs_discovered": 1, 00:10:07.631 "num_base_bdevs_operational": 3, 00:10:07.631 "base_bdevs_list": [ 00:10:07.631 { 00:10:07.631 "name": "pt1", 00:10:07.631 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:07.631 "is_configured": true, 00:10:07.631 "data_offset": 2048, 00:10:07.631 "data_size": 63488 00:10:07.631 }, 00:10:07.631 { 00:10:07.631 "name": null, 00:10:07.631 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:07.631 "is_configured": false, 00:10:07.631 "data_offset": 2048, 00:10:07.631 "data_size": 63488 00:10:07.631 }, 00:10:07.631 { 00:10:07.631 "name": null, 00:10:07.631 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:07.631 "is_configured": false, 00:10:07.631 "data_offset": 2048, 00:10:07.631 "data_size": 63488 00:10:07.631 } 00:10:07.631 ] 00:10:07.631 }' 00:10:07.631 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:07.631 21:09:19 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:07.890 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:10:07.890 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:07.890 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:10:08.151 [2024-07-14 
21:09:19.660405] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:10:08.151 [2024-07-14 21:09:19.660482] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.151 [2024-07-14 21:09:19.660492] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x139e8e835680 00:10:08.151 [2024-07-14 21:09:19.660499] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.151 [2024-07-14 21:09:19.660610] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.151 [2024-07-14 21:09:19.660620] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:10:08.151 [2024-07-14 21:09:19.660659] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:10:08.151 [2024-07-14 21:09:19.660667] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:10:08.151 pt2 00:10:08.151 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:08.151 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:08.151 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:10:08.409 [2024-07-14 21:09:19.880420] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:10:08.409 [2024-07-14 21:09:19.880474] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:08.409 [2024-07-14 21:09:19.880499] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x139e8e835400 00:10:08.409 [2024-07-14 21:09:19.880506] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:08.409 [2024-07-14 21:09:19.880636] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:08.409 [2024-07-14 21:09:19.880661] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:10:08.409 [2024-07-14 21:09:19.880697] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:10:08.409 [2024-07-14 21:09:19.880704] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:10:08.409 [2024-07-14 21:09:19.880734] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x139e8e834780 00:10:08.409 [2024-07-14 21:09:19.880738] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:08.409 [2024-07-14 21:09:19.880758] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x139e8e897e20 00:10:08.409 [2024-07-14 21:09:19.880866] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x139e8e834780 00:10:08.409 [2024-07-14 21:09:19.880871] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x139e8e834780 00:10:08.409 [2024-07-14 21:09:19.880905] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:08.409 pt3 00:10:08.409 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:10:08.409 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:10:08.409 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:08.409 21:09:19 bdev_raid.raid_superblock_test -- 
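
Member hot-plug is verified next: deleting pt2 while the array is still configuring drops num_base_bdevs_discovered back to 1, and once pt2 and pt3 are re-registered the raid module finishes assembly on its own, with no second bdev_raid_create, bringing raid_bdev1 online with 190464 blocks of 512 bytes. A condensed recap; the jq projection is added for illustration:

  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all | jq -r \
      '.[] | select(.name == "raid_bdev1") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
  # expected output: online 3/3
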
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:08.409 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:08.409 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:08.409 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:08.409 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:08.409 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:08.410 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:08.410 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:08.410 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:08.410 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:08.410 21:09:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:08.668 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:08.668 "name": "raid_bdev1", 00:10:08.668 "uuid": "52679bdf-4225-11ef-aa83-81fbc7dfef58", 00:10:08.668 "strip_size_kb": 64, 00:10:08.668 "state": "online", 00:10:08.668 "raid_level": "raid0", 00:10:08.668 "superblock": true, 00:10:08.668 "num_base_bdevs": 3, 00:10:08.668 "num_base_bdevs_discovered": 3, 00:10:08.668 "num_base_bdevs_operational": 3, 00:10:08.668 "base_bdevs_list": [ 00:10:08.668 { 00:10:08.668 "name": "pt1", 00:10:08.668 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:08.668 "is_configured": true, 00:10:08.668 "data_offset": 2048, 00:10:08.668 "data_size": 63488 00:10:08.668 }, 00:10:08.668 { 00:10:08.668 "name": "pt2", 00:10:08.668 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:08.668 "is_configured": true, 00:10:08.668 "data_offset": 2048, 00:10:08.668 "data_size": 63488 00:10:08.668 }, 00:10:08.668 { 00:10:08.668 "name": "pt3", 00:10:08.668 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:08.668 "is_configured": true, 00:10:08.668 "data_offset": 2048, 00:10:08.668 "data_size": 63488 00:10:08.668 } 00:10:08.668 ] 00:10:08.668 }' 00:10:08.668 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:08.668 21:09:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:09.233 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:10:09.233 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:10:09.233 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:09.233 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:09.233 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:09.233 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:09.233 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:09.233 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:09.233 [2024-07-14 
21:09:20.780563] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:09.491 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:09.491 "name": "raid_bdev1", 00:10:09.491 "aliases": [ 00:10:09.491 "52679bdf-4225-11ef-aa83-81fbc7dfef58" 00:10:09.491 ], 00:10:09.491 "product_name": "Raid Volume", 00:10:09.491 "block_size": 512, 00:10:09.491 "num_blocks": 190464, 00:10:09.491 "uuid": "52679bdf-4225-11ef-aa83-81fbc7dfef58", 00:10:09.491 "assigned_rate_limits": { 00:10:09.491 "rw_ios_per_sec": 0, 00:10:09.491 "rw_mbytes_per_sec": 0, 00:10:09.491 "r_mbytes_per_sec": 0, 00:10:09.491 "w_mbytes_per_sec": 0 00:10:09.491 }, 00:10:09.491 "claimed": false, 00:10:09.491 "zoned": false, 00:10:09.491 "supported_io_types": { 00:10:09.491 "read": true, 00:10:09.491 "write": true, 00:10:09.491 "unmap": true, 00:10:09.491 "flush": true, 00:10:09.491 "reset": true, 00:10:09.491 "nvme_admin": false, 00:10:09.491 "nvme_io": false, 00:10:09.491 "nvme_io_md": false, 00:10:09.491 "write_zeroes": true, 00:10:09.491 "zcopy": false, 00:10:09.491 "get_zone_info": false, 00:10:09.491 "zone_management": false, 00:10:09.491 "zone_append": false, 00:10:09.491 "compare": false, 00:10:09.491 "compare_and_write": false, 00:10:09.491 "abort": false, 00:10:09.491 "seek_hole": false, 00:10:09.491 "seek_data": false, 00:10:09.491 "copy": false, 00:10:09.491 "nvme_iov_md": false 00:10:09.491 }, 00:10:09.491 "memory_domains": [ 00:10:09.491 { 00:10:09.491 "dma_device_id": "system", 00:10:09.491 "dma_device_type": 1 00:10:09.491 }, 00:10:09.491 { 00:10:09.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.491 "dma_device_type": 2 00:10:09.491 }, 00:10:09.491 { 00:10:09.491 "dma_device_id": "system", 00:10:09.491 "dma_device_type": 1 00:10:09.491 }, 00:10:09.491 { 00:10:09.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.491 "dma_device_type": 2 00:10:09.491 }, 00:10:09.491 { 00:10:09.491 "dma_device_id": "system", 00:10:09.491 "dma_device_type": 1 00:10:09.491 }, 00:10:09.491 { 00:10:09.491 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.491 "dma_device_type": 2 00:10:09.491 } 00:10:09.491 ], 00:10:09.491 "driver_specific": { 00:10:09.491 "raid": { 00:10:09.491 "uuid": "52679bdf-4225-11ef-aa83-81fbc7dfef58", 00:10:09.491 "strip_size_kb": 64, 00:10:09.491 "state": "online", 00:10:09.491 "raid_level": "raid0", 00:10:09.491 "superblock": true, 00:10:09.491 "num_base_bdevs": 3, 00:10:09.491 "num_base_bdevs_discovered": 3, 00:10:09.491 "num_base_bdevs_operational": 3, 00:10:09.491 "base_bdevs_list": [ 00:10:09.491 { 00:10:09.491 "name": "pt1", 00:10:09.491 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.491 "is_configured": true, 00:10:09.491 "data_offset": 2048, 00:10:09.491 "data_size": 63488 00:10:09.491 }, 00:10:09.491 { 00:10:09.491 "name": "pt2", 00:10:09.491 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:09.491 "is_configured": true, 00:10:09.491 "data_offset": 2048, 00:10:09.491 "data_size": 63488 00:10:09.491 }, 00:10:09.491 { 00:10:09.491 "name": "pt3", 00:10:09.491 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:09.491 "is_configured": true, 00:10:09.491 "data_offset": 2048, 00:10:09.491 "data_size": 63488 00:10:09.491 } 00:10:09.491 ] 00:10:09.491 } 00:10:09.491 } 00:10:09.491 }' 00:10:09.491 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:09.491 21:09:20 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:10:09.491 pt2 00:10:09.491 pt3' 00:10:09.491 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:09.491 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:10:09.491 21:09:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:09.749 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:09.749 "name": "pt1", 00:10:09.749 "aliases": [ 00:10:09.749 "00000000-0000-0000-0000-000000000001" 00:10:09.749 ], 00:10:09.749 "product_name": "passthru", 00:10:09.749 "block_size": 512, 00:10:09.749 "num_blocks": 65536, 00:10:09.749 "uuid": "00000000-0000-0000-0000-000000000001", 00:10:09.749 "assigned_rate_limits": { 00:10:09.749 "rw_ios_per_sec": 0, 00:10:09.749 "rw_mbytes_per_sec": 0, 00:10:09.749 "r_mbytes_per_sec": 0, 00:10:09.749 "w_mbytes_per_sec": 0 00:10:09.749 }, 00:10:09.750 "claimed": true, 00:10:09.750 "claim_type": "exclusive_write", 00:10:09.750 "zoned": false, 00:10:09.750 "supported_io_types": { 00:10:09.750 "read": true, 00:10:09.750 "write": true, 00:10:09.750 "unmap": true, 00:10:09.750 "flush": true, 00:10:09.750 "reset": true, 00:10:09.750 "nvme_admin": false, 00:10:09.750 "nvme_io": false, 00:10:09.750 "nvme_io_md": false, 00:10:09.750 "write_zeroes": true, 00:10:09.750 "zcopy": true, 00:10:09.750 "get_zone_info": false, 00:10:09.750 "zone_management": false, 00:10:09.750 "zone_append": false, 00:10:09.750 "compare": false, 00:10:09.750 "compare_and_write": false, 00:10:09.750 "abort": true, 00:10:09.750 "seek_hole": false, 00:10:09.750 "seek_data": false, 00:10:09.750 "copy": true, 00:10:09.750 "nvme_iov_md": false 00:10:09.750 }, 00:10:09.750 "memory_domains": [ 00:10:09.750 { 00:10:09.750 "dma_device_id": "system", 00:10:09.750 "dma_device_type": 1 00:10:09.750 }, 00:10:09.750 { 00:10:09.750 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:09.750 "dma_device_type": 2 00:10:09.750 } 00:10:09.750 ], 00:10:09.750 "driver_specific": { 00:10:09.750 "passthru": { 00:10:09.750 "name": "pt1", 00:10:09.750 "base_bdev_name": "malloc1" 00:10:09.750 } 00:10:09.750 } 00:10:09.750 }' 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:10:09.750 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:10.009 "name": "pt2", 00:10:10.009 "aliases": [ 00:10:10.009 "00000000-0000-0000-0000-000000000002" 00:10:10.009 ], 00:10:10.009 "product_name": "passthru", 00:10:10.009 "block_size": 512, 00:10:10.009 "num_blocks": 65536, 00:10:10.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:10:10.009 "assigned_rate_limits": { 00:10:10.009 "rw_ios_per_sec": 0, 00:10:10.009 "rw_mbytes_per_sec": 0, 00:10:10.009 "r_mbytes_per_sec": 0, 00:10:10.009 "w_mbytes_per_sec": 0 00:10:10.009 }, 00:10:10.009 "claimed": true, 00:10:10.009 "claim_type": "exclusive_write", 00:10:10.009 "zoned": false, 00:10:10.009 "supported_io_types": { 00:10:10.009 "read": true, 00:10:10.009 "write": true, 00:10:10.009 "unmap": true, 00:10:10.009 "flush": true, 00:10:10.009 "reset": true, 00:10:10.009 "nvme_admin": false, 00:10:10.009 "nvme_io": false, 00:10:10.009 "nvme_io_md": false, 00:10:10.009 "write_zeroes": true, 00:10:10.009 "zcopy": true, 00:10:10.009 "get_zone_info": false, 00:10:10.009 "zone_management": false, 00:10:10.009 "zone_append": false, 00:10:10.009 "compare": false, 00:10:10.009 "compare_and_write": false, 00:10:10.009 "abort": true, 00:10:10.009 "seek_hole": false, 00:10:10.009 "seek_data": false, 00:10:10.009 "copy": true, 00:10:10.009 "nvme_iov_md": false 00:10:10.009 }, 00:10:10.009 "memory_domains": [ 00:10:10.009 { 00:10:10.009 "dma_device_id": "system", 00:10:10.009 "dma_device_type": 1 00:10:10.009 }, 00:10:10.009 { 00:10:10.009 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.009 "dma_device_type": 2 00:10:10.009 } 00:10:10.009 ], 00:10:10.009 "driver_specific": { 00:10:10.009 "passthru": { 00:10:10.009 "name": "pt2", 00:10:10.009 "base_bdev_name": "malloc2" 00:10:10.009 } 00:10:10.009 } 00:10:10.009 }' 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:10:10.009 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:10.576 "name": "pt3", 00:10:10.576 "aliases": [ 00:10:10.576 "00000000-0000-0000-0000-000000000003" 00:10:10.576 ], 00:10:10.576 "product_name": "passthru", 00:10:10.576 "block_size": 512, 00:10:10.576 "num_blocks": 65536, 00:10:10.576 "uuid": "00000000-0000-0000-0000-000000000003", 00:10:10.576 "assigned_rate_limits": { 00:10:10.576 "rw_ios_per_sec": 0, 00:10:10.576 "rw_mbytes_per_sec": 0, 00:10:10.576 "r_mbytes_per_sec": 0, 00:10:10.576 "w_mbytes_per_sec": 0 00:10:10.576 }, 00:10:10.576 "claimed": true, 00:10:10.576 "claim_type": "exclusive_write", 00:10:10.576 "zoned": false, 00:10:10.576 "supported_io_types": { 00:10:10.576 "read": true, 00:10:10.576 "write": true, 00:10:10.576 "unmap": true, 00:10:10.576 "flush": true, 00:10:10.576 "reset": true, 00:10:10.576 "nvme_admin": false, 00:10:10.576 "nvme_io": false, 00:10:10.576 "nvme_io_md": false, 00:10:10.576 "write_zeroes": true, 00:10:10.576 "zcopy": true, 00:10:10.576 "get_zone_info": false, 00:10:10.576 "zone_management": false, 00:10:10.576 "zone_append": false, 00:10:10.576 "compare": false, 00:10:10.576 "compare_and_write": false, 00:10:10.576 "abort": true, 00:10:10.576 "seek_hole": false, 00:10:10.576 "seek_data": false, 00:10:10.576 "copy": true, 00:10:10.576 "nvme_iov_md": false 00:10:10.576 }, 00:10:10.576 "memory_domains": [ 00:10:10.576 { 00:10:10.576 "dma_device_id": "system", 00:10:10.576 "dma_device_type": 1 00:10:10.576 }, 00:10:10.576 { 00:10:10.576 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:10.576 "dma_device_type": 2 00:10:10.576 } 00:10:10.576 ], 00:10:10.576 "driver_specific": { 00:10:10.576 "passthru": { 00:10:10.576 "name": "pt3", 00:10:10.576 "base_bdev_name": "malloc3" 00:10:10.576 } 00:10:10.576 } 00:10:10.576 }' 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:10:10.576 21:09:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:10:10.576 [2024-07-14 21:09:22.096659] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:10.576 21:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 52679bdf-4225-11ef-aa83-81fbc7dfef58 '!=' 52679bdf-4225-11ef-aa83-81fbc7dfef58 ']' 00:10:10.576 21:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:10:10.576 21:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:10.576 21:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:10.576 21:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 53376 00:10:10.576 21:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 53376 ']' 00:10:10.576 21:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 53376 00:10:10.576 21:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:10:10.576 21:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:10.576 21:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 53376 00:10:10.576 21:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:10:10.836 21:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:10.836 killing process with pid 53376 00:10:10.836 21:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:10.836 21:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53376' 00:10:10.836 21:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 53376 00:10:10.836 [2024-07-14 21:09:22.126545] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:10.836 21:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 53376 00:10:10.836 [2024-07-14 21:09:22.126567] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:10.836 [2024-07-14 21:09:22.126581] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:10.836 [2024-07-14 21:09:22.126585] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x139e8e834780 name raid_bdev1, state offline 00:10:10.836 [2024-07-14 21:09:22.146306] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:10.836 21:09:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:10:10.836 00:10:10.836 real 0m11.463s 00:10:10.836 user 0m20.335s 00:10:10.836 sys 0m1.840s 00:10:10.836 ************************************ 00:10:10.836 END TEST raid_superblock_test 00:10:10.836 ************************************ 00:10:10.836 21:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:10.836 21:09:22 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:10:10.836 21:09:22 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:10.836 21:09:22 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 3 read 00:10:10.836 21:09:22 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:10.836 21:09:22 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:10.836 21:09:22 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:11.095 ************************************ 
00:10:11.095 START TEST raid_read_error_test 00:10:11.095 ************************************ 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 read 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.dUWoNw7ioH 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53727 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53727 /var/tmp/spdk-raid.sock 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 53727 ']' 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- 
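
raid_read_error_test uses bdevperf rather than bdev_svc as the RPC target: the command line above sets up a 60-second randrw workload (50 percent reads, 128 KiB I/Os, queue depth 1) against raid_bdev1, with -z holding the run until it is started over RPC and -L bdev_raid enabling the module's debug log. A sketch of the launch step, assuming the same socket; waitforlisten is the autotest helper that polls until the socket accepts RPCs:

  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock \
      -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid &
  raid_pid=$!
  waitforlisten $raid_pid /var/tmp/spdk-raid.sock
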
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:11.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:11.095 21:09:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:11.095 [2024-07-14 21:09:22.404073] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:11.095 [2024-07-14 21:09:22.404313] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:11.661 EAL: TSC is not safe to use in SMP mode 00:10:11.661 EAL: TSC is not invariant 00:10:11.661 [2024-07-14 21:09:22.987470] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:11.661 [2024-07-14 21:09:23.080865] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:11.661 [2024-07-14 21:09:23.083458] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:11.661 [2024-07-14 21:09:23.084340] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:11.661 [2024-07-14 21:09:23.084355] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:12.226 21:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:12.226 21:09:23 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:12.226 21:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:12.226 21:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:12.484 BaseBdev1_malloc 00:10:12.484 21:09:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:12.742 true 00:10:12.742 21:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:13.000 [2024-07-14 21:09:24.325525] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:13.000 [2024-07-14 21:09:24.325586] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.000 [2024-07-14 21:09:24.325646] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf276fa34780 00:10:13.000 [2024-07-14 21:09:24.325680] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.000 [2024-07-14 21:09:24.326345] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.000 [2024-07-14 21:09:24.326372] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:13.000 BaseBdev1 00:10:13.000 21:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:13.000 21:09:24 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:13.000 BaseBdev2_malloc 00:10:13.258 21:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:13.258 true 00:10:13.258 21:09:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:13.517 [2024-07-14 21:09:25.001604] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:13.517 [2024-07-14 21:09:25.001666] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:13.517 [2024-07-14 21:09:25.001705] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf276fa34c80 00:10:13.517 [2024-07-14 21:09:25.001713] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:13.517 [2024-07-14 21:09:25.002400] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:13.517 [2024-07-14 21:09:25.002425] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:10:13.517 BaseBdev2 00:10:13.517 21:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:13.517 21:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:13.775 BaseBdev3_malloc 00:10:13.776 21:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:14.034 true 00:10:14.034 21:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:14.293 [2024-07-14 21:09:25.781598] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:14.293 [2024-07-14 21:09:25.781660] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:14.293 [2024-07-14 21:09:25.781711] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xf276fa35180 00:10:14.293 [2024-07-14 21:09:25.781718] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:14.293 [2024-07-14 21:09:25.782476] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:14.293 [2024-07-14 21:09:25.782502] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:14.293 BaseBdev3 00:10:14.293 21:09:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:14.552 [2024-07-14 21:09:26.009607] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:14.552 [2024-07-14 21:09:26.010274] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:14.552 [2024-07-14 21:09:26.010314] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:14.552 [2024-07-14 
21:09:26.010383] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xf276fa35400 00:10:14.552 [2024-07-14 21:09:26.010389] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:14.552 [2024-07-14 21:09:26.010423] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf276faa0e20 00:10:14.552 [2024-07-14 21:09:26.010505] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xf276fa35400 00:10:14.552 [2024-07-14 21:09:26.010510] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xf276fa35400 00:10:14.552 [2024-07-14 21:09:26.010536] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:14.552 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:14.552 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:14.552 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:14.552 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:14.552 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:14.552 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:14.552 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:14.552 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:14.552 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:14.552 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:14.552 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:14.552 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:14.811 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:14.811 "name": "raid_bdev1", 00:10:14.811 "uuid": "59d47e04-4225-11ef-aa83-81fbc7dfef58", 00:10:14.811 "strip_size_kb": 64, 00:10:14.811 "state": "online", 00:10:14.811 "raid_level": "raid0", 00:10:14.811 "superblock": true, 00:10:14.811 "num_base_bdevs": 3, 00:10:14.811 "num_base_bdevs_discovered": 3, 00:10:14.811 "num_base_bdevs_operational": 3, 00:10:14.811 "base_bdevs_list": [ 00:10:14.811 { 00:10:14.811 "name": "BaseBdev1", 00:10:14.811 "uuid": "bf56c870-78b3-a95c-a488-fadf87cd8e27", 00:10:14.811 "is_configured": true, 00:10:14.811 "data_offset": 2048, 00:10:14.811 "data_size": 63488 00:10:14.811 }, 00:10:14.811 { 00:10:14.811 "name": "BaseBdev2", 00:10:14.811 "uuid": "cdf9ce1d-843c-bc59-aa3f-ce47fc346bd1", 00:10:14.811 "is_configured": true, 00:10:14.811 "data_offset": 2048, 00:10:14.811 "data_size": 63488 00:10:14.811 }, 00:10:14.811 { 00:10:14.811 "name": "BaseBdev3", 00:10:14.811 "uuid": "be8abe90-dfbe-d95a-93c0-9474dcec5dc1", 00:10:14.811 "is_configured": true, 00:10:14.811 "data_offset": 2048, 00:10:14.811 "data_size": 63488 00:10:14.811 } 00:10:14.811 ] 00:10:14.811 }' 00:10:14.811 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:14.811 21:09:26 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set 
+x 00:10:15.070 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:15.070 21:09:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:15.329 [2024-07-14 21:09:26.689858] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xf276faa0ec0 00:10:16.267 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:16.526 21:09:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:16.786 21:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:16.786 "name": "raid_bdev1", 00:10:16.786 "uuid": "59d47e04-4225-11ef-aa83-81fbc7dfef58", 00:10:16.786 "strip_size_kb": 64, 00:10:16.786 "state": "online", 00:10:16.786 "raid_level": "raid0", 00:10:16.786 "superblock": true, 00:10:16.786 "num_base_bdevs": 3, 00:10:16.786 "num_base_bdevs_discovered": 3, 00:10:16.786 "num_base_bdevs_operational": 3, 00:10:16.786 "base_bdevs_list": [ 00:10:16.786 { 00:10:16.786 "name": "BaseBdev1", 00:10:16.786 "uuid": "bf56c870-78b3-a95c-a488-fadf87cd8e27", 00:10:16.786 "is_configured": true, 00:10:16.786 "data_offset": 2048, 00:10:16.786 "data_size": 63488 00:10:16.786 }, 00:10:16.786 { 00:10:16.786 "name": "BaseBdev2", 00:10:16.786 "uuid": "cdf9ce1d-843c-bc59-aa3f-ce47fc346bd1", 00:10:16.786 "is_configured": true, 00:10:16.786 "data_offset": 2048, 00:10:16.786 "data_size": 63488 00:10:16.786 }, 00:10:16.786 { 00:10:16.786 "name": "BaseBdev3", 00:10:16.786 "uuid": "be8abe90-dfbe-d95a-93c0-9474dcec5dc1", 00:10:16.786 "is_configured": true, 00:10:16.786 "data_offset": 2048, 00:10:16.786 "data_size": 63488 00:10:16.786 } 00:10:16.786 ] 
00:10:16.786 }' 00:10:16.786 21:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:16.786 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.045 21:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:17.309 [2024-07-14 21:09:28.688050] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:17.309 [2024-07-14 21:09:28.688075] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:17.309 [2024-07-14 21:09:28.688404] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:17.309 [2024-07-14 21:09:28.688414] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:17.309 [2024-07-14 21:09:28.688420] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:17.309 [2024-07-14 21:09:28.688424] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xf276fa35400 name raid_bdev1, state offline 00:10:17.309 0 00:10:17.309 21:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 53727 00:10:17.309 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 53727 ']' 00:10:17.309 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 53727 00:10:17.309 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:10:17.309 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:17.309 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53727 00:10:17.309 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:10:17.309 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:10:17.309 killing process with pid 53727 00:10:17.309 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:10:17.309 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53727' 00:10:17.309 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 53727 00:10:17.309 [2024-07-14 21:09:28.718569] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:17.309 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 53727 00:10:17.309 [2024-07-14 21:09:28.736811] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:17.578 21:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.dUWoNw7ioH 00:10:17.578 21:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:17.578 21:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:17.578 21:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:10:17.578 21:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:17.578 21:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:17.578 21:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:17.578 21:09:28 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != 
\0\.\0\0 ]] 00:10:17.578 00:10:17.578 real 0m6.542s 00:10:17.578 user 0m10.271s 00:10:17.578 sys 0m1.091s 00:10:17.578 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:17.578 21:09:28 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.578 ************************************ 00:10:17.578 END TEST raid_read_error_test 00:10:17.578 ************************************ 00:10:17.578 21:09:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:17.578 21:09:28 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 3 write 00:10:17.578 21:09:28 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:17.578 21:09:28 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:17.578 21:09:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:17.578 ************************************ 00:10:17.578 START TEST raid_write_error_test 00:10:17.578 ************************************ 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 3 write 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@799 -- # strip_size=64 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.2VylYpBedC 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=53858 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 53858 /var/tmp/spdk-raid.sock 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 53858 ']' 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:17.578 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:17.578 21:09:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:10:17.578 [2024-07-14 21:09:28.996227] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:17.578 [2024-07-14 21:09:28.996361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:18.147 EAL: TSC is not safe to use in SMP mode 00:10:18.147 EAL: TSC is not invariant 00:10:18.147 [2024-07-14 21:09:29.573943] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:18.147 [2024-07-14 21:09:29.671578] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
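As in the read pass above, this write pass drives a single bdevperf process over the raid bdev and counts injected failures. A minimal annotated sketch of that lifecycle, with flag meanings inferred from this log rather than from bdevperf's help text, and with the output redirection implied by the later grep (it is not shown in the xtrace):

    # start bdevperf idle against the raid test socket; -z keeps it waiting until
    # an RPC starts the run, and -L bdev_raid enables the *DEBUG* raid logs seen here
    ./build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 \
        -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" &

    # once raid_bdev1 exists and an error is injected, kick off the workload
    ./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests

    # pass criterion: the failures-per-second column for raid_bdev1 must be nonzero
    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    [[ "$fail_per_s" != "0.00" ]]

Here -t 60 -w randrw -M 50 -o 128k -q 1 presumably request 60 seconds of 50/50 random read/write in 128 KiB I/Os at queue depth 1, -T presumably restricts the job to raid_bdev1, and -f presumably lets the run continue past I/O errors so they can be counted; the read pass above settled at fail_per_s=0.50, and this write pass lands at 0.49 below.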
00:10:18.147 [2024-07-14 21:09:29.674146] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:18.147 [2024-07-14 21:09:29.674982] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.147 [2024-07-14 21:09:29.674997] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:18.715 21:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:18.715 21:09:30 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:10:18.715 21:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:18.715 21:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:10:18.974 BaseBdev1_malloc 00:10:18.974 21:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:10:19.232 true 00:10:19.232 21:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:10:19.491 [2024-07-14 21:09:30.839959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:10:19.491 [2024-07-14 21:09:30.840056] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:19.491 [2024-07-14 21:09:30.840091] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3bfcd6234780 00:10:19.491 [2024-07-14 21:09:30.840099] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:19.491 [2024-07-14 21:09:30.840720] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:19.491 [2024-07-14 21:09:30.840760] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:10:19.491 BaseBdev1 00:10:19.491 21:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:19.491 21:09:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:10:19.750 BaseBdev2_malloc 00:10:19.750 21:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:10:20.008 true 00:10:20.008 21:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:10:20.008 [2024-07-14 21:09:31.547959] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:10:20.008 [2024-07-14 21:09:31.548014] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:20.008 [2024-07-14 21:09:31.548049] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3bfcd6234c80 00:10:20.008 [2024-07-14 21:09:31.548056] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:20.008 [2024-07-14 21:09:31.548694] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:20.008 [2024-07-14 21:09:31.548720] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:10:20.008 BaseBdev2 00:10:20.267 21:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:10:20.267 21:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:10:20.525 BaseBdev3_malloc 00:10:20.525 21:09:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:10:20.784 true 00:10:20.784 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:10:21.043 [2024-07-14 21:09:32.340066] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:10:21.043 [2024-07-14 21:09:32.340160] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:10:21.043 [2024-07-14 21:09:32.340185] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3bfcd6235180 00:10:21.043 [2024-07-14 21:09:32.340194] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:10:21.043 [2024-07-14 21:09:32.340851] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:10:21.043 [2024-07-14 21:09:32.340885] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:10:21.043 BaseBdev3 00:10:21.043 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:10:21.043 [2024-07-14 21:09:32.564059] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:21.043 [2024-07-14 21:09:32.564699] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:21.043 [2024-07-14 21:09:32.564725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:21.043 [2024-07-14 21:09:32.564800] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3bfcd6235400 00:10:21.043 [2024-07-14 21:09:32.564818] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:21.043 [2024-07-14 21:09:32.564892] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3bfcd62a0e20 00:10:21.043 [2024-07-14 21:09:32.565008] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3bfcd6235400 00:10:21.043 [2024-07-14 21:09:32.565013] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3bfcd6235400 00:10:21.043 [2024-07-14 21:09:32.565044] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:21.043 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:21.043 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:21.043 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:21.043 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:21.043 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
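Each base bdev in these error tests is a three-layer stack, so failures can be injected underneath the raid without touching the raid code itself: a malloc bdev supplies the backing store, an error bdev (exposed as EE_<name>) is the injection point, and a passthru bdev gives it the plain BaseBdevN name the raid consumes. A sketch of the stack as built above, using the same RPCs with the repo-root paths shortened:

    rpc='./scripts/rpc.py -s /var/tmp/spdk-raid.sock'
    for i in 1 2 3; do
        $rpc bdev_malloc_create 32 512 -b BaseBdev${i}_malloc     # 32 MiB backing store, 512 B blocks
        $rpc bdev_error_create BaseBdev${i}_malloc                # wraps it as EE_BaseBdev${i}_malloc
        $rpc bdev_passthru_create -b EE_BaseBdev${i}_malloc -p BaseBdev${i}
    done
    # assemble raid0 over the passthru layer: 64 KiB strip, superblock on (-s)
    $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

    # later, the test makes every write to the first leg fail:
    $rpc bdev_error_inject_error EE_BaseBdev1_malloc write failure

With a 64 KiB strip, assuming offsets aligned to the 128 KiB I/O size, a sizable fraction of bdevperf's requests cross the failing first leg, which is what produces the nonzero fail_per_s checked at the end.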
00:10:21.043 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:21.043 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:21.043 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:21.043 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:21.043 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:21.043 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:21.043 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:21.302 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:21.302 "name": "raid_bdev1", 00:10:21.302 "uuid": "5dbc9f54-4225-11ef-aa83-81fbc7dfef58", 00:10:21.302 "strip_size_kb": 64, 00:10:21.302 "state": "online", 00:10:21.302 "raid_level": "raid0", 00:10:21.302 "superblock": true, 00:10:21.302 "num_base_bdevs": 3, 00:10:21.302 "num_base_bdevs_discovered": 3, 00:10:21.302 "num_base_bdevs_operational": 3, 00:10:21.302 "base_bdevs_list": [ 00:10:21.302 { 00:10:21.302 "name": "BaseBdev1", 00:10:21.302 "uuid": "a71e8d48-eac3-575e-8dd8-842f85345926", 00:10:21.302 "is_configured": true, 00:10:21.302 "data_offset": 2048, 00:10:21.302 "data_size": 63488 00:10:21.302 }, 00:10:21.302 { 00:10:21.302 "name": "BaseBdev2", 00:10:21.302 "uuid": "06446a73-7fdb-465b-89bb-a2abd34dd299", 00:10:21.302 "is_configured": true, 00:10:21.302 "data_offset": 2048, 00:10:21.302 "data_size": 63488 00:10:21.302 }, 00:10:21.302 { 00:10:21.302 "name": "BaseBdev3", 00:10:21.302 "uuid": "1a000028-fe44-845a-b312-d14c47a8e53a", 00:10:21.302 "is_configured": true, 00:10:21.302 "data_offset": 2048, 00:10:21.302 "data_size": 63488 00:10:21.302 } 00:10:21.302 ] 00:10:21.302 }' 00:10:21.302 21:09:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:21.302 21:09:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:21.869 21:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:10:21.869 21:09:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:10:21.869 [2024-07-14 21:09:33.244332] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3bfcd62a0ec0 00:10:22.804 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:10:23.062 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:10:23.062 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:10:23.062 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:10:23.062 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 3 00:10:23.062 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:10:23.062 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:10:23.062 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:10:23.062 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:23.062 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:23.062 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:23.063 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:23.063 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:23.063 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:23.063 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:23.063 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:10:23.322 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:23.322 "name": "raid_bdev1", 00:10:23.322 "uuid": "5dbc9f54-4225-11ef-aa83-81fbc7dfef58", 00:10:23.322 "strip_size_kb": 64, 00:10:23.322 "state": "online", 00:10:23.322 "raid_level": "raid0", 00:10:23.322 "superblock": true, 00:10:23.322 "num_base_bdevs": 3, 00:10:23.322 "num_base_bdevs_discovered": 3, 00:10:23.322 "num_base_bdevs_operational": 3, 00:10:23.322 "base_bdevs_list": [ 00:10:23.322 { 00:10:23.322 "name": "BaseBdev1", 00:10:23.322 "uuid": "a71e8d48-eac3-575e-8dd8-842f85345926", 00:10:23.322 "is_configured": true, 00:10:23.322 "data_offset": 2048, 00:10:23.322 "data_size": 63488 00:10:23.322 }, 00:10:23.322 { 00:10:23.322 "name": "BaseBdev2", 00:10:23.322 "uuid": "06446a73-7fdb-465b-89bb-a2abd34dd299", 00:10:23.322 "is_configured": true, 00:10:23.322 "data_offset": 2048, 00:10:23.322 "data_size": 63488 00:10:23.322 }, 00:10:23.322 { 00:10:23.322 "name": "BaseBdev3", 00:10:23.322 "uuid": "1a000028-fe44-845a-b312-d14c47a8e53a", 00:10:23.322 "is_configured": true, 00:10:23.322 "data_offset": 2048, 00:10:23.322 "data_size": 63488 00:10:23.322 } 00:10:23.322 ] 00:10:23.322 }' 00:10:23.322 21:09:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:23.322 21:09:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:23.581 21:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:10:23.840 [2024-07-14 21:09:35.266287] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:10:23.840 [2024-07-14 21:09:35.266310] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:23.840 [2024-07-14 21:09:35.266643] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:23.840 [2024-07-14 21:09:35.266653] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:23.840 [2024-07-14 21:09:35.266660] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:23.840 [2024-07-14 21:09:35.266663] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3bfcd6235400 name raid_bdev1, state offline 00:10:23.840 0 00:10:23.840 21:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # 
killprocess 53858 00:10:23.840 21:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 53858 ']' 00:10:23.840 21:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 53858 00:10:23.840 21:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:10:23.840 21:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:23.840 21:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 53858 00:10:23.840 21:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:10:23.840 21:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:10:23.840 21:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:10:23.840 killing process with pid 53858 00:10:23.840 21:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53858' 00:10:23.840 21:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 53858 00:10:23.840 [2024-07-14 21:09:35.295502] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:23.840 21:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 53858 00:10:23.840 [2024-07-14 21:09:35.313718] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:24.099 21:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.2VylYpBedC 00:10:24.099 21:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:10:24.099 21:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:10:24.099 21:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.49 00:10:24.099 21:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:10:24.099 21:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:24.099 21:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:24.099 21:09:35 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.49 != \0\.\0\0 ]] 00:10:24.099 00:10:24.099 real 0m6.522s 00:10:24.099 user 0m10.128s 00:10:24.099 sys 0m1.159s 00:10:24.099 21:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:24.099 21:09:35 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.099 ************************************ 00:10:24.099 END TEST raid_write_error_test 00:10:24.099 ************************************ 00:10:24.099 21:09:35 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:24.099 21:09:35 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:10:24.099 21:09:35 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 3 false 00:10:24.099 21:09:35 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:24.099 21:09:35 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:24.099 21:09:35 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:24.099 ************************************ 00:10:24.099 START TEST raid_state_function_test 00:10:24.099 ************************************ 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # 
raid_state_function_test concat 3 false 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=53987 00:10:24.099 Process raid pid: 53987 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 53987' 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 53987 /var/tmp/spdk-raid.sock 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 53987 ']' 
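Unlike the error tests, raid_state_function_test needs no I/O generator: it runs the bare bdev_svc app and walks a raid bdev through its state machine purely via RPC. The key move, visible just below, is declaring the raid before any of its members exist. A sketch under the same RPC conventions as above:

    # declare a concat raid over three bdevs that do not exist yet;
    # the 'Currently unable to find bdev' notices below are expected,
    # and Existed_Raid is left in the "configuring" state
    $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

    # members are then created directly (no error/passthru stack this time);
    # each one bumps num_base_bdevs_discovered while the state stays "configuring"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1

In the log that follows, num_base_bdevs_discovered climbs 0 -> 1 -> 2 as BaseBdev1 and BaseBdev2 appear, and the raid only begins to assemble once the third member is created at the end of this excerpt.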
00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:24.099 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:24.099 21:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:24.100 21:09:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:24.100 [2024-07-14 21:09:35.575339] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:24.100 [2024-07-14 21:09:35.575628] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:24.667 EAL: TSC is not safe to use in SMP mode 00:10:24.667 EAL: TSC is not invariant 00:10:24.667 [2024-07-14 21:09:36.147575] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:24.926 [2024-07-14 21:09:36.248891] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:24.926 [2024-07-14 21:09:36.251418] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:24.926 [2024-07-14 21:09:36.252385] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:24.926 [2024-07-14 21:09:36.252398] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:25.185 21:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:25.185 21:09:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:10:25.185 21:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:25.444 [2024-07-14 21:09:36.817698] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:25.444 [2024-07-14 21:09:36.817755] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:25.444 [2024-07-14 21:09:36.817759] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:25.444 [2024-07-14 21:09:36.817783] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:25.444 [2024-07-14 21:09:36.817786] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:25.444 [2024-07-14 21:09:36.817793] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:25.444 21:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:25.444 21:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:25.444 21:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:25.444 21:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:25.444 21:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local 
strip_size=64 00:10:25.444 21:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:25.444 21:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:25.444 21:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:25.444 21:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:25.444 21:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:25.444 21:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:25.444 21:09:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:25.703 21:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:25.703 "name": "Existed_Raid", 00:10:25.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.703 "strip_size_kb": 64, 00:10:25.703 "state": "configuring", 00:10:25.703 "raid_level": "concat", 00:10:25.703 "superblock": false, 00:10:25.703 "num_base_bdevs": 3, 00:10:25.703 "num_base_bdevs_discovered": 0, 00:10:25.703 "num_base_bdevs_operational": 3, 00:10:25.703 "base_bdevs_list": [ 00:10:25.703 { 00:10:25.703 "name": "BaseBdev1", 00:10:25.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.703 "is_configured": false, 00:10:25.703 "data_offset": 0, 00:10:25.703 "data_size": 0 00:10:25.703 }, 00:10:25.703 { 00:10:25.703 "name": "BaseBdev2", 00:10:25.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.703 "is_configured": false, 00:10:25.703 "data_offset": 0, 00:10:25.703 "data_size": 0 00:10:25.703 }, 00:10:25.703 { 00:10:25.703 "name": "BaseBdev3", 00:10:25.703 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:25.703 "is_configured": false, 00:10:25.703 "data_offset": 0, 00:10:25.703 "data_size": 0 00:10:25.703 } 00:10:25.703 ] 00:10:25.703 }' 00:10:25.703 21:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:25.703 21:09:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:25.961 21:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:26.220 [2024-07-14 21:09:37.661705] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:26.220 [2024-07-14 21:09:37.661725] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfb487a34500 name Existed_Raid, state configuring 00:10:26.220 21:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:26.478 [2024-07-14 21:09:37.925714] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:26.478 [2024-07-14 21:09:37.925764] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:26.478 [2024-07-14 21:09:37.925768] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:26.478 [2024-07-14 21:09:37.925792] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:26.478 [2024-07-14 
21:09:37.925795] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:26.478 [2024-07-14 21:09:37.925813] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:26.478 21:09:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:26.737 [2024-07-14 21:09:38.194630] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:26.737 BaseBdev1 00:10:26.737 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:26.737 21:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:26.737 21:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:26.737 21:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:26.737 21:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:26.737 21:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:26.737 21:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:27.007 21:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:27.278 [ 00:10:27.278 { 00:10:27.278 "name": "BaseBdev1", 00:10:27.278 "aliases": [ 00:10:27.278 "6117a4be-4225-11ef-aa83-81fbc7dfef58" 00:10:27.278 ], 00:10:27.278 "product_name": "Malloc disk", 00:10:27.278 "block_size": 512, 00:10:27.278 "num_blocks": 65536, 00:10:27.278 "uuid": "6117a4be-4225-11ef-aa83-81fbc7dfef58", 00:10:27.278 "assigned_rate_limits": { 00:10:27.278 "rw_ios_per_sec": 0, 00:10:27.278 "rw_mbytes_per_sec": 0, 00:10:27.278 "r_mbytes_per_sec": 0, 00:10:27.278 "w_mbytes_per_sec": 0 00:10:27.278 }, 00:10:27.278 "claimed": true, 00:10:27.278 "claim_type": "exclusive_write", 00:10:27.278 "zoned": false, 00:10:27.278 "supported_io_types": { 00:10:27.278 "read": true, 00:10:27.278 "write": true, 00:10:27.278 "unmap": true, 00:10:27.278 "flush": true, 00:10:27.278 "reset": true, 00:10:27.278 "nvme_admin": false, 00:10:27.278 "nvme_io": false, 00:10:27.278 "nvme_io_md": false, 00:10:27.278 "write_zeroes": true, 00:10:27.278 "zcopy": true, 00:10:27.278 "get_zone_info": false, 00:10:27.278 "zone_management": false, 00:10:27.278 "zone_append": false, 00:10:27.278 "compare": false, 00:10:27.278 "compare_and_write": false, 00:10:27.278 "abort": true, 00:10:27.278 "seek_hole": false, 00:10:27.278 "seek_data": false, 00:10:27.278 "copy": true, 00:10:27.278 "nvme_iov_md": false 00:10:27.278 }, 00:10:27.278 "memory_domains": [ 00:10:27.278 { 00:10:27.278 "dma_device_id": "system", 00:10:27.278 "dma_device_type": 1 00:10:27.278 }, 00:10:27.278 { 00:10:27.278 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:27.278 "dma_device_type": 2 00:10:27.278 } 00:10:27.278 ], 00:10:27.278 "driver_specific": {} 00:10:27.278 } 00:10:27.278 ] 00:10:27.278 21:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:27.278 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 
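verify_raid_bdev_state (the bdev_raid.sh@116 block that follows) pulls the raid's JSON from the RPC layer and asserts each field against the caller's expectations. A condensed sketch of the checks implied by the locals below, assuming jq is on the PATH; this is not the script's literal code:

    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
    [[ $(jq -r '.state' <<< "$info") == configuring ]]           # expected_state
    [[ $(jq -r '.raid_level' <<< "$info") == concat ]]
    [[ $(jq -r '.strip_size_kb' <<< "$info") -eq 64 ]]
    [[ $(jq -r '.num_base_bdevs_operational' <<< "$info") -eq 3 ]]

The same helper ran twice per error test above with expected_state=online; here it is invoked with "configuring" until all three members exist.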
00:10:27.278 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:27.279 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:27.279 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:27.279 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:27.279 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:27.279 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:27.279 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:27.279 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:27.279 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:27.279 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:27.279 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:27.537 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:27.537 "name": "Existed_Raid", 00:10:27.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.537 "strip_size_kb": 64, 00:10:27.537 "state": "configuring", 00:10:27.537 "raid_level": "concat", 00:10:27.537 "superblock": false, 00:10:27.537 "num_base_bdevs": 3, 00:10:27.537 "num_base_bdevs_discovered": 1, 00:10:27.537 "num_base_bdevs_operational": 3, 00:10:27.537 "base_bdevs_list": [ 00:10:27.537 { 00:10:27.537 "name": "BaseBdev1", 00:10:27.537 "uuid": "6117a4be-4225-11ef-aa83-81fbc7dfef58", 00:10:27.537 "is_configured": true, 00:10:27.537 "data_offset": 0, 00:10:27.537 "data_size": 65536 00:10:27.537 }, 00:10:27.537 { 00:10:27.537 "name": "BaseBdev2", 00:10:27.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.537 "is_configured": false, 00:10:27.537 "data_offset": 0, 00:10:27.537 "data_size": 0 00:10:27.537 }, 00:10:27.537 { 00:10:27.537 "name": "BaseBdev3", 00:10:27.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:27.537 "is_configured": false, 00:10:27.537 "data_offset": 0, 00:10:27.537 "data_size": 0 00:10:27.537 } 00:10:27.537 ] 00:10:27.537 }' 00:10:27.537 21:09:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:27.537 21:09:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:27.796 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:28.054 [2024-07-14 21:09:39.441796] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:28.054 [2024-07-14 21:09:39.441836] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfb487a34500 name Existed_Raid, state configuring 00:10:28.054 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:28.313 [2024-07-14 21:09:39.649798] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:10:28.313 [2024-07-14 21:09:39.650683] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:28.313 [2024-07-14 21:09:39.650728] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:28.313 [2024-07-14 21:09:39.650733] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:28.313 [2024-07-14 21:09:39.650756] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:28.313 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:28.572 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:28.572 "name": "Existed_Raid", 00:10:28.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.572 "strip_size_kb": 64, 00:10:28.572 "state": "configuring", 00:10:28.572 "raid_level": "concat", 00:10:28.572 "superblock": false, 00:10:28.572 "num_base_bdevs": 3, 00:10:28.572 "num_base_bdevs_discovered": 1, 00:10:28.572 "num_base_bdevs_operational": 3, 00:10:28.572 "base_bdevs_list": [ 00:10:28.572 { 00:10:28.572 "name": "BaseBdev1", 00:10:28.572 "uuid": "6117a4be-4225-11ef-aa83-81fbc7dfef58", 00:10:28.572 "is_configured": true, 00:10:28.572 "data_offset": 0, 00:10:28.572 "data_size": 65536 00:10:28.572 }, 00:10:28.572 { 00:10:28.572 "name": "BaseBdev2", 00:10:28.572 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.572 "is_configured": false, 00:10:28.572 "data_offset": 0, 00:10:28.572 "data_size": 0 00:10:28.573 }, 00:10:28.573 { 00:10:28.573 "name": "BaseBdev3", 00:10:28.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:28.573 "is_configured": false, 00:10:28.573 "data_offset": 0, 00:10:28.573 "data_size": 0 00:10:28.573 } 00:10:28.573 ] 00:10:28.573 }' 00:10:28.573 21:09:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:28.573 21:09:39 
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:28.831 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:29.089 [2024-07-14 21:09:40.469935] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:29.089 BaseBdev2 00:10:29.089 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:29.089 21:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:29.089 21:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:29.089 21:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:29.089 21:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:29.089 21:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:29.089 21:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:29.348 21:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:29.607 [ 00:10:29.607 { 00:10:29.607 "name": "BaseBdev2", 00:10:29.607 "aliases": [ 00:10:29.607 "6272f219-4225-11ef-aa83-81fbc7dfef58" 00:10:29.607 ], 00:10:29.607 "product_name": "Malloc disk", 00:10:29.607 "block_size": 512, 00:10:29.607 "num_blocks": 65536, 00:10:29.607 "uuid": "6272f219-4225-11ef-aa83-81fbc7dfef58", 00:10:29.607 "assigned_rate_limits": { 00:10:29.607 "rw_ios_per_sec": 0, 00:10:29.607 "rw_mbytes_per_sec": 0, 00:10:29.607 "r_mbytes_per_sec": 0, 00:10:29.607 "w_mbytes_per_sec": 0 00:10:29.607 }, 00:10:29.607 "claimed": true, 00:10:29.607 "claim_type": "exclusive_write", 00:10:29.607 "zoned": false, 00:10:29.607 "supported_io_types": { 00:10:29.607 "read": true, 00:10:29.607 "write": true, 00:10:29.607 "unmap": true, 00:10:29.607 "flush": true, 00:10:29.607 "reset": true, 00:10:29.607 "nvme_admin": false, 00:10:29.607 "nvme_io": false, 00:10:29.607 "nvme_io_md": false, 00:10:29.607 "write_zeroes": true, 00:10:29.607 "zcopy": true, 00:10:29.607 "get_zone_info": false, 00:10:29.607 "zone_management": false, 00:10:29.607 "zone_append": false, 00:10:29.607 "compare": false, 00:10:29.607 "compare_and_write": false, 00:10:29.607 "abort": true, 00:10:29.607 "seek_hole": false, 00:10:29.607 "seek_data": false, 00:10:29.607 "copy": true, 00:10:29.607 "nvme_iov_md": false 00:10:29.607 }, 00:10:29.607 "memory_domains": [ 00:10:29.607 { 00:10:29.607 "dma_device_id": "system", 00:10:29.607 "dma_device_type": 1 00:10:29.607 }, 00:10:29.607 { 00:10:29.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:29.607 "dma_device_type": 2 00:10:29.607 } 00:10:29.607 ], 00:10:29.607 "driver_specific": {} 00:10:29.607 } 00:10:29.607 ] 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:29.607 21:09:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:29.865 21:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:29.865 "name": "Existed_Raid", 00:10:29.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.865 "strip_size_kb": 64, 00:10:29.865 "state": "configuring", 00:10:29.865 "raid_level": "concat", 00:10:29.865 "superblock": false, 00:10:29.865 "num_base_bdevs": 3, 00:10:29.865 "num_base_bdevs_discovered": 2, 00:10:29.865 "num_base_bdevs_operational": 3, 00:10:29.865 "base_bdevs_list": [ 00:10:29.865 { 00:10:29.865 "name": "BaseBdev1", 00:10:29.865 "uuid": "6117a4be-4225-11ef-aa83-81fbc7dfef58", 00:10:29.865 "is_configured": true, 00:10:29.865 "data_offset": 0, 00:10:29.865 "data_size": 65536 00:10:29.865 }, 00:10:29.865 { 00:10:29.865 "name": "BaseBdev2", 00:10:29.865 "uuid": "6272f219-4225-11ef-aa83-81fbc7dfef58", 00:10:29.865 "is_configured": true, 00:10:29.865 "data_offset": 0, 00:10:29.865 "data_size": 65536 00:10:29.865 }, 00:10:29.865 { 00:10:29.865 "name": "BaseBdev3", 00:10:29.865 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:29.865 "is_configured": false, 00:10:29.865 "data_offset": 0, 00:10:29.865 "data_size": 0 00:10:29.865 } 00:10:29.865 ] 00:10:29.865 }' 00:10:29.865 21:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:29.865 21:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:30.124 21:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:30.383 [2024-07-14 21:09:41.802010] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:30.383 [2024-07-14 21:09:41.802031] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xfb487a34a00 00:10:30.383 [2024-07-14 21:09:41.802051] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:30.383 [2024-07-14 21:09:41.802070] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfb487a97e20 00:10:30.383 [2024-07-14 21:09:41.802152] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfb487a34a00 00:10:30.383 [2024-07-14 21:09:41.802156] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xfb487a34a00 00:10:30.383 [2024-07-14 21:09:41.802186] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:30.383 BaseBdev3 00:10:30.383 21:09:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:30.383 21:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:30.383 21:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:30.383 21:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:30.383 21:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:30.383 21:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:30.383 21:09:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:30.642 21:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:30.900 [ 00:10:30.900 { 00:10:30.900 "name": "BaseBdev3", 00:10:30.900 "aliases": [ 00:10:30.901 "633e34c5-4225-11ef-aa83-81fbc7dfef58" 00:10:30.901 ], 00:10:30.901 "product_name": "Malloc disk", 00:10:30.901 "block_size": 512, 00:10:30.901 "num_blocks": 65536, 00:10:30.901 "uuid": "633e34c5-4225-11ef-aa83-81fbc7dfef58", 00:10:30.901 "assigned_rate_limits": { 00:10:30.901 "rw_ios_per_sec": 0, 00:10:30.901 "rw_mbytes_per_sec": 0, 00:10:30.901 "r_mbytes_per_sec": 0, 00:10:30.901 "w_mbytes_per_sec": 0 00:10:30.901 }, 00:10:30.901 "claimed": true, 00:10:30.901 "claim_type": "exclusive_write", 00:10:30.901 "zoned": false, 00:10:30.901 "supported_io_types": { 00:10:30.901 "read": true, 00:10:30.901 "write": true, 00:10:30.901 "unmap": true, 00:10:30.901 "flush": true, 00:10:30.901 "reset": true, 00:10:30.901 "nvme_admin": false, 00:10:30.901 "nvme_io": false, 00:10:30.901 "nvme_io_md": false, 00:10:30.901 "write_zeroes": true, 00:10:30.901 "zcopy": true, 00:10:30.901 "get_zone_info": false, 00:10:30.901 "zone_management": false, 00:10:30.901 "zone_append": false, 00:10:30.901 "compare": false, 00:10:30.901 "compare_and_write": false, 00:10:30.901 "abort": true, 00:10:30.901 "seek_hole": false, 00:10:30.901 "seek_data": false, 00:10:30.901 "copy": true, 00:10:30.901 "nvme_iov_md": false 00:10:30.901 }, 00:10:30.901 "memory_domains": [ 00:10:30.901 { 00:10:30.901 "dma_device_id": "system", 00:10:30.901 "dma_device_type": 1 00:10:30.901 }, 00:10:30.901 { 00:10:30.901 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:30.901 "dma_device_type": 2 00:10:30.901 } 00:10:30.901 ], 00:10:30.901 "driver_specific": {} 00:10:30.901 } 00:10:30.901 ] 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 
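At this point all three base bdevs are claimed and the array has transitioned to online, so the script re-runs its standard verification: pull the raid bdev's JSON over the RPC socket and compare fields with jq. A minimal standalone sketch of that check, using the socket path and expected values from this run (the shell variable names are illustrative, not part of SPDK):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock

    # Fetch the raid bdev entry the same way bdev_raid.sh@126 does above.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')

    # Compare the fields verify_raid_bdev_state asserts on in this run.
    state=$(jq -r .state <<< "$info")
    level=$(jq -r .raid_level <<< "$info")
    discovered=$(jq -r .num_base_bdevs_discovered <<< "$info")

    if [[ $state != online || $level != concat || $discovered -ne 3 ]]; then
        echo "unexpected raid state: $state/$level/$discovered" >&2
        exit 1
    fi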
00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:30.901 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:31.159 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:31.159 "name": "Existed_Raid", 00:10:31.159 "uuid": "633e3acb-4225-11ef-aa83-81fbc7dfef58", 00:10:31.159 "strip_size_kb": 64, 00:10:31.159 "state": "online", 00:10:31.159 "raid_level": "concat", 00:10:31.159 "superblock": false, 00:10:31.159 "num_base_bdevs": 3, 00:10:31.159 "num_base_bdevs_discovered": 3, 00:10:31.159 "num_base_bdevs_operational": 3, 00:10:31.159 "base_bdevs_list": [ 00:10:31.159 { 00:10:31.159 "name": "BaseBdev1", 00:10:31.159 "uuid": "6117a4be-4225-11ef-aa83-81fbc7dfef58", 00:10:31.159 "is_configured": true, 00:10:31.159 "data_offset": 0, 00:10:31.159 "data_size": 65536 00:10:31.159 }, 00:10:31.159 { 00:10:31.159 "name": "BaseBdev2", 00:10:31.159 "uuid": "6272f219-4225-11ef-aa83-81fbc7dfef58", 00:10:31.159 "is_configured": true, 00:10:31.159 "data_offset": 0, 00:10:31.159 "data_size": 65536 00:10:31.159 }, 00:10:31.159 { 00:10:31.159 "name": "BaseBdev3", 00:10:31.159 "uuid": "633e34c5-4225-11ef-aa83-81fbc7dfef58", 00:10:31.159 "is_configured": true, 00:10:31.159 "data_offset": 0, 00:10:31.159 "data_size": 65536 00:10:31.159 } 00:10:31.159 ] 00:10:31.159 }' 00:10:31.159 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:31.159 21:09:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:31.416 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:31.416 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:31.417 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:31.417 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:31.417 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:31.417 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:31.417 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:31.417 21:09:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:31.675 [2024-07-14 21:09:43.001965] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:31.675 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:31.675 "name": "Existed_Raid", 00:10:31.675 "aliases": [ 00:10:31.675 "633e3acb-4225-11ef-aa83-81fbc7dfef58" 00:10:31.675 ], 00:10:31.675 "product_name": "Raid Volume", 00:10:31.675 "block_size": 512, 00:10:31.675 "num_blocks": 196608, 00:10:31.675 "uuid": "633e3acb-4225-11ef-aa83-81fbc7dfef58", 00:10:31.675 "assigned_rate_limits": { 00:10:31.675 "rw_ios_per_sec": 0, 00:10:31.675 "rw_mbytes_per_sec": 0, 00:10:31.675 "r_mbytes_per_sec": 0, 00:10:31.675 "w_mbytes_per_sec": 0 00:10:31.675 }, 00:10:31.675 "claimed": false, 00:10:31.675 "zoned": false, 00:10:31.675 "supported_io_types": { 00:10:31.675 "read": true, 00:10:31.675 "write": true, 00:10:31.675 "unmap": true, 00:10:31.675 "flush": true, 00:10:31.675 "reset": true, 00:10:31.675 "nvme_admin": false, 00:10:31.675 "nvme_io": false, 00:10:31.675 "nvme_io_md": false, 00:10:31.675 "write_zeroes": true, 00:10:31.675 "zcopy": false, 00:10:31.675 "get_zone_info": false, 00:10:31.675 "zone_management": false, 00:10:31.675 "zone_append": false, 00:10:31.675 "compare": false, 00:10:31.675 "compare_and_write": false, 00:10:31.675 "abort": false, 00:10:31.675 "seek_hole": false, 00:10:31.675 "seek_data": false, 00:10:31.675 "copy": false, 00:10:31.675 "nvme_iov_md": false 00:10:31.675 }, 00:10:31.675 "memory_domains": [ 00:10:31.675 { 00:10:31.675 "dma_device_id": "system", 00:10:31.675 "dma_device_type": 1 00:10:31.675 }, 00:10:31.675 { 00:10:31.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.675 "dma_device_type": 2 00:10:31.675 }, 00:10:31.675 { 00:10:31.675 "dma_device_id": "system", 00:10:31.675 "dma_device_type": 1 00:10:31.675 }, 00:10:31.675 { 00:10:31.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.675 "dma_device_type": 2 00:10:31.675 }, 00:10:31.675 { 00:10:31.675 "dma_device_id": "system", 00:10:31.675 "dma_device_type": 1 00:10:31.675 }, 00:10:31.675 { 00:10:31.675 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.675 "dma_device_type": 2 00:10:31.675 } 00:10:31.675 ], 00:10:31.675 "driver_specific": { 00:10:31.675 "raid": { 00:10:31.675 "uuid": "633e3acb-4225-11ef-aa83-81fbc7dfef58", 00:10:31.675 "strip_size_kb": 64, 00:10:31.675 "state": "online", 00:10:31.675 "raid_level": "concat", 00:10:31.675 "superblock": false, 00:10:31.675 "num_base_bdevs": 3, 00:10:31.675 "num_base_bdevs_discovered": 3, 00:10:31.675 "num_base_bdevs_operational": 3, 00:10:31.675 "base_bdevs_list": [ 00:10:31.675 { 00:10:31.675 "name": "BaseBdev1", 00:10:31.675 "uuid": "6117a4be-4225-11ef-aa83-81fbc7dfef58", 00:10:31.675 "is_configured": true, 00:10:31.675 "data_offset": 0, 00:10:31.675 "data_size": 65536 00:10:31.675 }, 00:10:31.675 { 00:10:31.675 "name": "BaseBdev2", 00:10:31.675 "uuid": "6272f219-4225-11ef-aa83-81fbc7dfef58", 00:10:31.675 "is_configured": true, 00:10:31.675 "data_offset": 0, 00:10:31.675 "data_size": 65536 00:10:31.675 }, 00:10:31.675 { 00:10:31.675 "name": "BaseBdev3", 00:10:31.675 "uuid": "633e34c5-4225-11ef-aa83-81fbc7dfef58", 00:10:31.675 "is_configured": true, 00:10:31.675 "data_offset": 0, 00:10:31.675 "data_size": 65536 00:10:31.675 } 00:10:31.675 ] 00:10:31.675 } 00:10:31.675 } 00:10:31.675 }' 00:10:31.675 21:09:43 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:31.675 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:31.675 BaseBdev2 00:10:31.675 BaseBdev3' 00:10:31.675 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:31.675 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:31.675 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:31.933 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:31.933 "name": "BaseBdev1", 00:10:31.933 "aliases": [ 00:10:31.933 "6117a4be-4225-11ef-aa83-81fbc7dfef58" 00:10:31.933 ], 00:10:31.933 "product_name": "Malloc disk", 00:10:31.933 "block_size": 512, 00:10:31.933 "num_blocks": 65536, 00:10:31.933 "uuid": "6117a4be-4225-11ef-aa83-81fbc7dfef58", 00:10:31.933 "assigned_rate_limits": { 00:10:31.933 "rw_ios_per_sec": 0, 00:10:31.933 "rw_mbytes_per_sec": 0, 00:10:31.933 "r_mbytes_per_sec": 0, 00:10:31.933 "w_mbytes_per_sec": 0 00:10:31.933 }, 00:10:31.933 "claimed": true, 00:10:31.933 "claim_type": "exclusive_write", 00:10:31.933 "zoned": false, 00:10:31.933 "supported_io_types": { 00:10:31.933 "read": true, 00:10:31.933 "write": true, 00:10:31.933 "unmap": true, 00:10:31.933 "flush": true, 00:10:31.933 "reset": true, 00:10:31.933 "nvme_admin": false, 00:10:31.933 "nvme_io": false, 00:10:31.933 "nvme_io_md": false, 00:10:31.933 "write_zeroes": true, 00:10:31.933 "zcopy": true, 00:10:31.933 "get_zone_info": false, 00:10:31.933 "zone_management": false, 00:10:31.933 "zone_append": false, 00:10:31.933 "compare": false, 00:10:31.933 "compare_and_write": false, 00:10:31.933 "abort": true, 00:10:31.933 "seek_hole": false, 00:10:31.933 "seek_data": false, 00:10:31.933 "copy": true, 00:10:31.933 "nvme_iov_md": false 00:10:31.933 }, 00:10:31.933 "memory_domains": [ 00:10:31.934 { 00:10:31.934 "dma_device_id": "system", 00:10:31.934 "dma_device_type": 1 00:10:31.934 }, 00:10:31.934 { 00:10:31.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:31.934 "dma_device_type": 2 00:10:31.934 } 00:10:31.934 ], 00:10:31.934 "driver_specific": {} 00:10:31.934 }' 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq 
.dif_type 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:31.934 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:32.192 "name": "BaseBdev2", 00:10:32.192 "aliases": [ 00:10:32.192 "6272f219-4225-11ef-aa83-81fbc7dfef58" 00:10:32.192 ], 00:10:32.192 "product_name": "Malloc disk", 00:10:32.192 "block_size": 512, 00:10:32.192 "num_blocks": 65536, 00:10:32.192 "uuid": "6272f219-4225-11ef-aa83-81fbc7dfef58", 00:10:32.192 "assigned_rate_limits": { 00:10:32.192 "rw_ios_per_sec": 0, 00:10:32.192 "rw_mbytes_per_sec": 0, 00:10:32.192 "r_mbytes_per_sec": 0, 00:10:32.192 "w_mbytes_per_sec": 0 00:10:32.192 }, 00:10:32.192 "claimed": true, 00:10:32.192 "claim_type": "exclusive_write", 00:10:32.192 "zoned": false, 00:10:32.192 "supported_io_types": { 00:10:32.192 "read": true, 00:10:32.192 "write": true, 00:10:32.192 "unmap": true, 00:10:32.192 "flush": true, 00:10:32.192 "reset": true, 00:10:32.192 "nvme_admin": false, 00:10:32.192 "nvme_io": false, 00:10:32.192 "nvme_io_md": false, 00:10:32.192 "write_zeroes": true, 00:10:32.192 "zcopy": true, 00:10:32.192 "get_zone_info": false, 00:10:32.192 "zone_management": false, 00:10:32.192 "zone_append": false, 00:10:32.192 "compare": false, 00:10:32.192 "compare_and_write": false, 00:10:32.192 "abort": true, 00:10:32.192 "seek_hole": false, 00:10:32.192 "seek_data": false, 00:10:32.192 "copy": true, 00:10:32.192 "nvme_iov_md": false 00:10:32.192 }, 00:10:32.192 "memory_domains": [ 00:10:32.192 { 00:10:32.192 "dma_device_id": "system", 00:10:32.192 "dma_device_type": 1 00:10:32.192 }, 00:10:32.192 { 00:10:32.192 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.192 "dma_device_type": 2 00:10:32.192 } 00:10:32.192 ], 00:10:32.192 "driver_specific": {} 00:10:32.192 }' 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- 
# for name in $base_bdev_names 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:32.192 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:32.450 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:32.450 "name": "BaseBdev3", 00:10:32.450 "aliases": [ 00:10:32.450 "633e34c5-4225-11ef-aa83-81fbc7dfef58" 00:10:32.450 ], 00:10:32.450 "product_name": "Malloc disk", 00:10:32.450 "block_size": 512, 00:10:32.450 "num_blocks": 65536, 00:10:32.450 "uuid": "633e34c5-4225-11ef-aa83-81fbc7dfef58", 00:10:32.450 "assigned_rate_limits": { 00:10:32.450 "rw_ios_per_sec": 0, 00:10:32.450 "rw_mbytes_per_sec": 0, 00:10:32.450 "r_mbytes_per_sec": 0, 00:10:32.450 "w_mbytes_per_sec": 0 00:10:32.450 }, 00:10:32.450 "claimed": true, 00:10:32.450 "claim_type": "exclusive_write", 00:10:32.450 "zoned": false, 00:10:32.450 "supported_io_types": { 00:10:32.450 "read": true, 00:10:32.450 "write": true, 00:10:32.450 "unmap": true, 00:10:32.450 "flush": true, 00:10:32.450 "reset": true, 00:10:32.450 "nvme_admin": false, 00:10:32.450 "nvme_io": false, 00:10:32.450 "nvme_io_md": false, 00:10:32.450 "write_zeroes": true, 00:10:32.450 "zcopy": true, 00:10:32.450 "get_zone_info": false, 00:10:32.450 "zone_management": false, 00:10:32.450 "zone_append": false, 00:10:32.450 "compare": false, 00:10:32.450 "compare_and_write": false, 00:10:32.450 "abort": true, 00:10:32.450 "seek_hole": false, 00:10:32.450 "seek_data": false, 00:10:32.450 "copy": true, 00:10:32.450 "nvme_iov_md": false 00:10:32.450 }, 00:10:32.451 "memory_domains": [ 00:10:32.451 { 00:10:32.451 "dma_device_id": "system", 00:10:32.451 "dma_device_type": 1 00:10:32.451 }, 00:10:32.451 { 00:10:32.451 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:32.451 "dma_device_type": 2 00:10:32.451 } 00:10:32.451 ], 00:10:32.451 "driver_specific": {} 00:10:32.451 }' 00:10:32.451 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.451 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:32.451 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:32.451 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.451 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:32.451 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:32.451 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.451 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:32.451 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:32.451 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:32.451 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:32.451 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:32.451 21:09:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:32.708 [2024-07-14 21:09:44.189988] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: 
BaseBdev1 00:10:32.708 [2024-07-14 21:09:44.190004] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:32.708 [2024-07-14 21:09:44.190032] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:32.708 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:32.966 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:32.966 "name": "Existed_Raid", 00:10:32.966 "uuid": "633e3acb-4225-11ef-aa83-81fbc7dfef58", 00:10:32.966 "strip_size_kb": 64, 00:10:32.966 "state": "offline", 00:10:32.966 "raid_level": "concat", 00:10:32.966 "superblock": false, 00:10:32.966 "num_base_bdevs": 3, 00:10:32.966 "num_base_bdevs_discovered": 2, 00:10:32.966 "num_base_bdevs_operational": 2, 00:10:32.966 "base_bdevs_list": [ 00:10:32.966 { 00:10:32.966 "name": null, 00:10:32.966 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:32.966 "is_configured": false, 00:10:32.966 "data_offset": 0, 00:10:32.966 "data_size": 65536 00:10:32.966 }, 00:10:32.966 { 00:10:32.966 "name": "BaseBdev2", 00:10:32.966 "uuid": "6272f219-4225-11ef-aa83-81fbc7dfef58", 00:10:32.966 "is_configured": true, 00:10:32.966 "data_offset": 0, 00:10:32.966 "data_size": 65536 00:10:32.966 }, 00:10:32.966 { 00:10:32.966 "name": "BaseBdev3", 00:10:32.966 "uuid": "633e34c5-4225-11ef-aa83-81fbc7dfef58", 00:10:32.966 "is_configured": true, 00:10:32.966 "data_offset": 0, 00:10:32.966 "data_size": 65536 00:10:32.966 } 00:10:32.966 ] 00:10:32.966 }' 00:10:32.966 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 
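The expected_state flip visible above follows from the raid level: has_redundancy returns 1 for concat, so deleting BaseBdev1 is expected to take the array from online to offline rather than leaving it degraded. A sketch of that check as the trace implies it (the list of redundant levels is an assumption; only the concat branch is confirmed by this run):

    # Assumed shape of the has_redundancy helper traced at bdev_raid.sh@213-215.
    has_redundancy() {
        case $1 in
        raid1 | raid5f) return 0 ;;  # assumed: levels that survive losing a base bdev
        *) return 1 ;;               # concat/raid0: no redundancy (branch taken here)
        esac
    }

    expected_state=online
    if ! has_redundancy concat; then
        expected_state=offline   # matches the offline state verified above
    fi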
00:10:32.966 21:09:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:33.224 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:33.224 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:33.224 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.224 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:33.481 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:33.481 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.481 21:09:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:33.739 [2024-07-14 21:09:45.216042] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:33.739 21:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:33.739 21:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:33.739 21:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:33.739 21:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:33.997 21:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:33.997 21:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:33.997 21:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:34.255 [2024-07-14 21:09:45.758167] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:34.255 [2024-07-14 21:09:45.758190] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfb487a34a00 name Existed_Raid, state offline 00:10:34.255 21:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:34.255 21:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:34.255 21:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:34.255 21:09:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:34.513 21:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:34.513 21:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:34.513 21:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:34.513 21:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:34.513 21:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:34.513 21:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:34.769 
BaseBdev2 00:10:34.769 21:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:34.769 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:34.769 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:34.769 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:34.769 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:34.769 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:34.769 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:35.027 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:35.286 [ 00:10:35.286 { 00:10:35.286 "name": "BaseBdev2", 00:10:35.286 "aliases": [ 00:10:35.286 "65e0fbdc-4225-11ef-aa83-81fbc7dfef58" 00:10:35.286 ], 00:10:35.286 "product_name": "Malloc disk", 00:10:35.286 "block_size": 512, 00:10:35.286 "num_blocks": 65536, 00:10:35.286 "uuid": "65e0fbdc-4225-11ef-aa83-81fbc7dfef58", 00:10:35.286 "assigned_rate_limits": { 00:10:35.286 "rw_ios_per_sec": 0, 00:10:35.286 "rw_mbytes_per_sec": 0, 00:10:35.286 "r_mbytes_per_sec": 0, 00:10:35.286 "w_mbytes_per_sec": 0 00:10:35.286 }, 00:10:35.286 "claimed": false, 00:10:35.286 "zoned": false, 00:10:35.286 "supported_io_types": { 00:10:35.286 "read": true, 00:10:35.286 "write": true, 00:10:35.286 "unmap": true, 00:10:35.286 "flush": true, 00:10:35.286 "reset": true, 00:10:35.286 "nvme_admin": false, 00:10:35.286 "nvme_io": false, 00:10:35.286 "nvme_io_md": false, 00:10:35.286 "write_zeroes": true, 00:10:35.286 "zcopy": true, 00:10:35.286 "get_zone_info": false, 00:10:35.286 "zone_management": false, 00:10:35.286 "zone_append": false, 00:10:35.286 "compare": false, 00:10:35.286 "compare_and_write": false, 00:10:35.286 "abort": true, 00:10:35.286 "seek_hole": false, 00:10:35.286 "seek_data": false, 00:10:35.286 "copy": true, 00:10:35.286 "nvme_iov_md": false 00:10:35.286 }, 00:10:35.286 "memory_domains": [ 00:10:35.286 { 00:10:35.286 "dma_device_id": "system", 00:10:35.286 "dma_device_type": 1 00:10:35.286 }, 00:10:35.286 { 00:10:35.286 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.286 "dma_device_type": 2 00:10:35.286 } 00:10:35.286 ], 00:10:35.286 "driver_specific": {} 00:10:35.286 } 00:10:35.286 ] 00:10:35.286 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:35.286 21:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:35.286 21:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:35.286 21:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:35.544 BaseBdev3 00:10:35.544 21:09:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:35.544 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:35.544 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local 
bdev_timeout= 00:10:35.544 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:35.544 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:35.544 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:35.544 21:09:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:35.802 21:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:35.802 [ 00:10:35.802 { 00:10:35.802 "name": "BaseBdev3", 00:10:35.802 "aliases": [ 00:10:35.802 "66416b86-4225-11ef-aa83-81fbc7dfef58" 00:10:35.802 ], 00:10:35.802 "product_name": "Malloc disk", 00:10:35.802 "block_size": 512, 00:10:35.802 "num_blocks": 65536, 00:10:35.802 "uuid": "66416b86-4225-11ef-aa83-81fbc7dfef58", 00:10:35.802 "assigned_rate_limits": { 00:10:35.802 "rw_ios_per_sec": 0, 00:10:35.802 "rw_mbytes_per_sec": 0, 00:10:35.802 "r_mbytes_per_sec": 0, 00:10:35.802 "w_mbytes_per_sec": 0 00:10:35.802 }, 00:10:35.802 "claimed": false, 00:10:35.802 "zoned": false, 00:10:35.802 "supported_io_types": { 00:10:35.802 "read": true, 00:10:35.802 "write": true, 00:10:35.802 "unmap": true, 00:10:35.802 "flush": true, 00:10:35.802 "reset": true, 00:10:35.802 "nvme_admin": false, 00:10:35.802 "nvme_io": false, 00:10:35.802 "nvme_io_md": false, 00:10:35.802 "write_zeroes": true, 00:10:35.802 "zcopy": true, 00:10:35.802 "get_zone_info": false, 00:10:35.802 "zone_management": false, 00:10:35.802 "zone_append": false, 00:10:35.802 "compare": false, 00:10:35.802 "compare_and_write": false, 00:10:35.802 "abort": true, 00:10:35.802 "seek_hole": false, 00:10:35.802 "seek_data": false, 00:10:35.802 "copy": true, 00:10:35.802 "nvme_iov_md": false 00:10:35.802 }, 00:10:35.802 "memory_domains": [ 00:10:35.802 { 00:10:35.802 "dma_device_id": "system", 00:10:35.802 "dma_device_type": 1 00:10:35.802 }, 00:10:35.802 { 00:10:35.802 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.802 "dma_device_type": 2 00:10:35.802 } 00:10:35.802 ], 00:10:35.802 "driver_specific": {} 00:10:35.802 } 00:10:35.802 ] 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:36.061 [2024-07-14 21:09:47.548156] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:36.061 [2024-07-14 21:09:47.548210] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:36.061 [2024-07-14 21:09:47.548234] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:36.061 [2024-07-14 21:09:47.548873] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 
3 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.061 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:36.319 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:36.319 "name": "Existed_Raid", 00:10:36.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.319 "strip_size_kb": 64, 00:10:36.319 "state": "configuring", 00:10:36.319 "raid_level": "concat", 00:10:36.319 "superblock": false, 00:10:36.319 "num_base_bdevs": 3, 00:10:36.319 "num_base_bdevs_discovered": 2, 00:10:36.319 "num_base_bdevs_operational": 3, 00:10:36.319 "base_bdevs_list": [ 00:10:36.319 { 00:10:36.319 "name": "BaseBdev1", 00:10:36.319 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:36.319 "is_configured": false, 00:10:36.319 "data_offset": 0, 00:10:36.319 "data_size": 0 00:10:36.319 }, 00:10:36.319 { 00:10:36.319 "name": "BaseBdev2", 00:10:36.319 "uuid": "65e0fbdc-4225-11ef-aa83-81fbc7dfef58", 00:10:36.319 "is_configured": true, 00:10:36.319 "data_offset": 0, 00:10:36.319 "data_size": 65536 00:10:36.319 }, 00:10:36.319 { 00:10:36.319 "name": "BaseBdev3", 00:10:36.319 "uuid": "66416b86-4225-11ef-aa83-81fbc7dfef58", 00:10:36.319 "is_configured": true, 00:10:36.319 "data_offset": 0, 00:10:36.319 "data_size": 65536 00:10:36.319 } 00:10:36.319 ] 00:10:36.319 }' 00:10:36.319 21:09:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:36.319 21:09:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:36.578 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:36.836 [2024-07-14 21:09:48.288161] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:36.836 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:36.836 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:36.836 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:36.836 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:36.836 
21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:36.836 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:36.836 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:36.836 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:36.836 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:36.836 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:36.836 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:36.836 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:37.094 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:37.094 "name": "Existed_Raid", 00:10:37.094 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.094 "strip_size_kb": 64, 00:10:37.094 "state": "configuring", 00:10:37.094 "raid_level": "concat", 00:10:37.094 "superblock": false, 00:10:37.094 "num_base_bdevs": 3, 00:10:37.094 "num_base_bdevs_discovered": 1, 00:10:37.094 "num_base_bdevs_operational": 3, 00:10:37.094 "base_bdevs_list": [ 00:10:37.094 { 00:10:37.094 "name": "BaseBdev1", 00:10:37.095 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:37.095 "is_configured": false, 00:10:37.095 "data_offset": 0, 00:10:37.095 "data_size": 0 00:10:37.095 }, 00:10:37.095 { 00:10:37.095 "name": null, 00:10:37.095 "uuid": "65e0fbdc-4225-11ef-aa83-81fbc7dfef58", 00:10:37.095 "is_configured": false, 00:10:37.095 "data_offset": 0, 00:10:37.095 "data_size": 65536 00:10:37.095 }, 00:10:37.095 { 00:10:37.095 "name": "BaseBdev3", 00:10:37.095 "uuid": "66416b86-4225-11ef-aa83-81fbc7dfef58", 00:10:37.095 "is_configured": true, 00:10:37.095 "data_offset": 0, 00:10:37.095 "data_size": 65536 00:10:37.095 } 00:10:37.095 ] 00:10:37.095 }' 00:10:37.095 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:37.095 21:09:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:37.353 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:37.353 21:09:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:37.611 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:37.611 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:37.870 [2024-07-14 21:09:49.228337] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:37.870 BaseBdev1 00:10:37.870 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:10:37.870 21:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:37.870 21:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:37.870 21:09:49 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@899 -- # local i 00:10:37.870 21:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:37.870 21:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:37.870 21:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:38.128 [ 00:10:38.128 { 00:10:38.128 "name": "BaseBdev1", 00:10:38.128 "aliases": [ 00:10:38.128 "67ab5df6-4225-11ef-aa83-81fbc7dfef58" 00:10:38.128 ], 00:10:38.128 "product_name": "Malloc disk", 00:10:38.128 "block_size": 512, 00:10:38.128 "num_blocks": 65536, 00:10:38.128 "uuid": "67ab5df6-4225-11ef-aa83-81fbc7dfef58", 00:10:38.128 "assigned_rate_limits": { 00:10:38.128 "rw_ios_per_sec": 0, 00:10:38.128 "rw_mbytes_per_sec": 0, 00:10:38.128 "r_mbytes_per_sec": 0, 00:10:38.128 "w_mbytes_per_sec": 0 00:10:38.128 }, 00:10:38.128 "claimed": true, 00:10:38.128 "claim_type": "exclusive_write", 00:10:38.128 "zoned": false, 00:10:38.128 "supported_io_types": { 00:10:38.128 "read": true, 00:10:38.128 "write": true, 00:10:38.128 "unmap": true, 00:10:38.128 "flush": true, 00:10:38.128 "reset": true, 00:10:38.128 "nvme_admin": false, 00:10:38.128 "nvme_io": false, 00:10:38.128 "nvme_io_md": false, 00:10:38.128 "write_zeroes": true, 00:10:38.128 "zcopy": true, 00:10:38.128 "get_zone_info": false, 00:10:38.128 "zone_management": false, 00:10:38.128 "zone_append": false, 00:10:38.128 "compare": false, 00:10:38.128 "compare_and_write": false, 00:10:38.128 "abort": true, 00:10:38.128 "seek_hole": false, 00:10:38.128 "seek_data": false, 00:10:38.128 "copy": true, 00:10:38.128 "nvme_iov_md": false 00:10:38.128 }, 00:10:38.128 "memory_domains": [ 00:10:38.128 { 00:10:38.128 "dma_device_id": "system", 00:10:38.128 "dma_device_type": 1 00:10:38.128 }, 00:10:38.128 { 00:10:38.128 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:38.128 "dma_device_type": 2 00:10:38.128 } 00:10:38.128 ], 00:10:38.128 "driver_specific": {} 00:10:38.128 } 00:10:38.128 ] 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.128 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:38.444 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:38.444 "name": "Existed_Raid", 00:10:38.444 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:38.444 "strip_size_kb": 64, 00:10:38.444 "state": "configuring", 00:10:38.444 "raid_level": "concat", 00:10:38.444 "superblock": false, 00:10:38.444 "num_base_bdevs": 3, 00:10:38.444 "num_base_bdevs_discovered": 2, 00:10:38.444 "num_base_bdevs_operational": 3, 00:10:38.444 "base_bdevs_list": [ 00:10:38.444 { 00:10:38.444 "name": "BaseBdev1", 00:10:38.444 "uuid": "67ab5df6-4225-11ef-aa83-81fbc7dfef58", 00:10:38.444 "is_configured": true, 00:10:38.444 "data_offset": 0, 00:10:38.444 "data_size": 65536 00:10:38.444 }, 00:10:38.444 { 00:10:38.444 "name": null, 00:10:38.444 "uuid": "65e0fbdc-4225-11ef-aa83-81fbc7dfef58", 00:10:38.444 "is_configured": false, 00:10:38.444 "data_offset": 0, 00:10:38.444 "data_size": 65536 00:10:38.444 }, 00:10:38.444 { 00:10:38.444 "name": "BaseBdev3", 00:10:38.444 "uuid": "66416b86-4225-11ef-aa83-81fbc7dfef58", 00:10:38.444 "is_configured": true, 00:10:38.444 "data_offset": 0, 00:10:38.444 "data_size": 65536 00:10:38.444 } 00:10:38.444 ] 00:10:38.444 }' 00:10:38.444 21:09:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:38.444 21:09:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:38.702 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:38.702 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:38.962 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:10:38.962 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:10:39.220 [2024-07-14 21:09:50.732332] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:39.220 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:39.220 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:39.220 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:39.220 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:39.220 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:39.220 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:39.220 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:39.220 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:39.220 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
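BaseBdev1 has just been recreated, and as with every malloc bdev in this log it is gated on the waitforbdev handshake before the raid state is re-checked: bdev_wait_for_examine drains pending examine callbacks, then bdev_get_bdevs -b NAME -t 2000 blocks until the bdev appears or the 2000 ms timeout expires. A condensed sketch of that gate (default timeout and parameter handling reconstructed from the autotest_common.sh@897-905 trace):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    waitforbdev() {
        local bdev_name=$1
        local bdev_timeout=${2:-2000}   # ms; every call in this log uses the default

        # Let pending examine callbacks finish so claims are settled.
        $rpc bdev_wait_for_examine

        # bdev_get_bdevs itself waits up to -t ms for the bdev to appear.
        $rpc bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"
    }

    waitforbdev BaseBdev1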
00:10:39.220 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:39.220 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.220 21:09:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:39.478 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:39.478 "name": "Existed_Raid", 00:10:39.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:39.478 "strip_size_kb": 64, 00:10:39.478 "state": "configuring", 00:10:39.478 "raid_level": "concat", 00:10:39.478 "superblock": false, 00:10:39.478 "num_base_bdevs": 3, 00:10:39.478 "num_base_bdevs_discovered": 1, 00:10:39.478 "num_base_bdevs_operational": 3, 00:10:39.478 "base_bdevs_list": [ 00:10:39.478 { 00:10:39.478 "name": "BaseBdev1", 00:10:39.478 "uuid": "67ab5df6-4225-11ef-aa83-81fbc7dfef58", 00:10:39.478 "is_configured": true, 00:10:39.478 "data_offset": 0, 00:10:39.478 "data_size": 65536 00:10:39.478 }, 00:10:39.478 { 00:10:39.478 "name": null, 00:10:39.478 "uuid": "65e0fbdc-4225-11ef-aa83-81fbc7dfef58", 00:10:39.478 "is_configured": false, 00:10:39.478 "data_offset": 0, 00:10:39.478 "data_size": 65536 00:10:39.478 }, 00:10:39.478 { 00:10:39.478 "name": null, 00:10:39.478 "uuid": "66416b86-4225-11ef-aa83-81fbc7dfef58", 00:10:39.478 "is_configured": false, 00:10:39.478 "data_offset": 0, 00:10:39.478 "data_size": 65536 00:10:39.478 } 00:10:39.478 ] 00:10:39.478 }' 00:10:39.478 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:39.478 21:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:39.736 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:39.736 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:39.994 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:10:39.994 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:10:40.252 [2024-07-14 21:09:51.708387] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:40.252 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:40.252 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:40.252 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:40.252 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:40.252 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:40.252 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:40.252 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:40.252 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:40.252 21:09:51 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:40.252 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:40.252 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:40.252 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:40.510 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:40.510 "name": "Existed_Raid", 00:10:40.510 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:40.510 "strip_size_kb": 64, 00:10:40.510 "state": "configuring", 00:10:40.510 "raid_level": "concat", 00:10:40.510 "superblock": false, 00:10:40.510 "num_base_bdevs": 3, 00:10:40.510 "num_base_bdevs_discovered": 2, 00:10:40.510 "num_base_bdevs_operational": 3, 00:10:40.510 "base_bdevs_list": [ 00:10:40.510 { 00:10:40.510 "name": "BaseBdev1", 00:10:40.510 "uuid": "67ab5df6-4225-11ef-aa83-81fbc7dfef58", 00:10:40.510 "is_configured": true, 00:10:40.510 "data_offset": 0, 00:10:40.510 "data_size": 65536 00:10:40.510 }, 00:10:40.510 { 00:10:40.510 "name": null, 00:10:40.510 "uuid": "65e0fbdc-4225-11ef-aa83-81fbc7dfef58", 00:10:40.510 "is_configured": false, 00:10:40.510 "data_offset": 0, 00:10:40.510 "data_size": 65536 00:10:40.510 }, 00:10:40.510 { 00:10:40.510 "name": "BaseBdev3", 00:10:40.510 "uuid": "66416b86-4225-11ef-aa83-81fbc7dfef58", 00:10:40.510 "is_configured": true, 00:10:40.510 "data_offset": 0, 00:10:40.510 "data_size": 65536 00:10:40.510 } 00:10:40.510 ] 00:10:40.510 }' 00:10:40.510 21:09:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:40.510 21:09:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.076 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:10:41.076 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.076 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:10:41.076 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:41.334 [2024-07-14 21:09:52.780396] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:41.334 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:41.334 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:41.334 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:41.334 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:41.334 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:41.334 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:41.334 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:41.334 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # 
local num_base_bdevs 00:10:41.334 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:41.334 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:41.334 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.334 21:09:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:41.592 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:41.592 "name": "Existed_Raid", 00:10:41.592 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:41.592 "strip_size_kb": 64, 00:10:41.592 "state": "configuring", 00:10:41.592 "raid_level": "concat", 00:10:41.592 "superblock": false, 00:10:41.592 "num_base_bdevs": 3, 00:10:41.592 "num_base_bdevs_discovered": 1, 00:10:41.592 "num_base_bdevs_operational": 3, 00:10:41.592 "base_bdevs_list": [ 00:10:41.592 { 00:10:41.592 "name": null, 00:10:41.592 "uuid": "67ab5df6-4225-11ef-aa83-81fbc7dfef58", 00:10:41.592 "is_configured": false, 00:10:41.592 "data_offset": 0, 00:10:41.592 "data_size": 65536 00:10:41.592 }, 00:10:41.592 { 00:10:41.592 "name": null, 00:10:41.592 "uuid": "65e0fbdc-4225-11ef-aa83-81fbc7dfef58", 00:10:41.592 "is_configured": false, 00:10:41.592 "data_offset": 0, 00:10:41.592 "data_size": 65536 00:10:41.592 }, 00:10:41.592 { 00:10:41.592 "name": "BaseBdev3", 00:10:41.592 "uuid": "66416b86-4225-11ef-aa83-81fbc7dfef58", 00:10:41.592 "is_configured": true, 00:10:41.592 "data_offset": 0, 00:10:41.592 "data_size": 65536 00:10:41.592 } 00:10:41.592 ] 00:10:41.592 }' 00:10:41.592 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:41.592 21:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:41.871 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:41.872 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:10:42.137 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:10:42.137 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:10:42.394 [2024-07-14 21:09:53.738515] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:42.394 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:42.394 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:42.394 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:42.395 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:42.395 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:42.395 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:42.395 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 
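The sequence just traced (steps @325 through @330, with the follow-up check at @331 below) deletes a claimed malloc base bdev out from under the configuring array and then re-attaches a spare. Condensed as a hedged sketch, using only RPC methods, bdev names, and jq filters that appear verbatim in this log:

# Condensed from the trace above; script and socket paths are the ones
# this run used.
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

$rpc bdev_malloc_delete BaseBdev1                    # slot 0 loses its bdev
$rpc bdev_raid_get_bdevs all |
	jq '.[0].base_bdevs_list[0].is_configured'       # expect: false
$rpc bdev_raid_add_base_bdev Existed_Raid BaseBdev2  # re-claim a spare bdev
$rpc bdev_raid_get_bdevs all |
	jq '.[0].base_bdevs_list[1].is_configured'       # expect: true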
00:10:42.395 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:42.395 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:42.395 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:42.395 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.395 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:42.653 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:42.653 "name": "Existed_Raid", 00:10:42.653 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:42.653 "strip_size_kb": 64, 00:10:42.653 "state": "configuring", 00:10:42.653 "raid_level": "concat", 00:10:42.653 "superblock": false, 00:10:42.653 "num_base_bdevs": 3, 00:10:42.653 "num_base_bdevs_discovered": 2, 00:10:42.653 "num_base_bdevs_operational": 3, 00:10:42.653 "base_bdevs_list": [ 00:10:42.653 { 00:10:42.653 "name": null, 00:10:42.653 "uuid": "67ab5df6-4225-11ef-aa83-81fbc7dfef58", 00:10:42.653 "is_configured": false, 00:10:42.653 "data_offset": 0, 00:10:42.653 "data_size": 65536 00:10:42.653 }, 00:10:42.653 { 00:10:42.653 "name": "BaseBdev2", 00:10:42.653 "uuid": "65e0fbdc-4225-11ef-aa83-81fbc7dfef58", 00:10:42.653 "is_configured": true, 00:10:42.653 "data_offset": 0, 00:10:42.653 "data_size": 65536 00:10:42.653 }, 00:10:42.653 { 00:10:42.653 "name": "BaseBdev3", 00:10:42.653 "uuid": "66416b86-4225-11ef-aa83-81fbc7dfef58", 00:10:42.653 "is_configured": true, 00:10:42.653 "data_offset": 0, 00:10:42.653 "data_size": 65536 00:10:42.653 } 00:10:42.653 ] 00:10:42.653 }' 00:10:42.653 21:09:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:42.653 21:09:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:42.910 21:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:42.910 21:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:43.168 21:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:10:43.168 21:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.168 21:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:10:43.168 21:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 67ab5df6-4225-11ef-aa83-81fbc7dfef58 00:10:43.426 [2024-07-14 21:09:54.890666] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:10:43.426 [2024-07-14 21:09:54.890687] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xfb487a34a00 00:10:43.426 [2024-07-14 21:09:54.890706] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 196608, blocklen 512 00:10:43.426 [2024-07-14 21:09:54.890726] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xfb487a97e20 00:10:43.426 [2024-07-14 
21:09:54.890817] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xfb487a34a00 00:10:43.426 [2024-07-14 21:09:54.890821] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xfb487a34a00 00:10:43.426 [2024-07-14 21:09:54.890852] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:43.426 NewBaseBdev 00:10:43.426 21:09:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:10:43.426 21:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:10:43.426 21:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:43.426 21:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:10:43.426 21:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:43.426 21:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:43.426 21:09:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:43.684 21:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:10:43.941 [ 00:10:43.942 { 00:10:43.942 "name": "NewBaseBdev", 00:10:43.942 "aliases": [ 00:10:43.942 "67ab5df6-4225-11ef-aa83-81fbc7dfef58" 00:10:43.942 ], 00:10:43.942 "product_name": "Malloc disk", 00:10:43.942 "block_size": 512, 00:10:43.942 "num_blocks": 65536, 00:10:43.942 "uuid": "67ab5df6-4225-11ef-aa83-81fbc7dfef58", 00:10:43.942 "assigned_rate_limits": { 00:10:43.942 "rw_ios_per_sec": 0, 00:10:43.942 "rw_mbytes_per_sec": 0, 00:10:43.942 "r_mbytes_per_sec": 0, 00:10:43.942 "w_mbytes_per_sec": 0 00:10:43.942 }, 00:10:43.942 "claimed": true, 00:10:43.942 "claim_type": "exclusive_write", 00:10:43.942 "zoned": false, 00:10:43.942 "supported_io_types": { 00:10:43.942 "read": true, 00:10:43.942 "write": true, 00:10:43.942 "unmap": true, 00:10:43.942 "flush": true, 00:10:43.942 "reset": true, 00:10:43.942 "nvme_admin": false, 00:10:43.942 "nvme_io": false, 00:10:43.942 "nvme_io_md": false, 00:10:43.942 "write_zeroes": true, 00:10:43.942 "zcopy": true, 00:10:43.942 "get_zone_info": false, 00:10:43.942 "zone_management": false, 00:10:43.942 "zone_append": false, 00:10:43.942 "compare": false, 00:10:43.942 "compare_and_write": false, 00:10:43.942 "abort": true, 00:10:43.942 "seek_hole": false, 00:10:43.942 "seek_data": false, 00:10:43.942 "copy": true, 00:10:43.942 "nvme_iov_md": false 00:10:43.942 }, 00:10:43.942 "memory_domains": [ 00:10:43.942 { 00:10:43.942 "dma_device_id": "system", 00:10:43.942 "dma_device_type": 1 00:10:43.942 }, 00:10:43.942 { 00:10:43.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:43.942 "dma_device_type": 2 00:10:43.942 } 00:10:43.942 ], 00:10:43.942 "driver_specific": {} 00:10:43.942 } 00:10:43.942 ] 00:10:43.942 21:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:10:43.942 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:43.942 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:43.942 21:09:55 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:43.942 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:43.942 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:43.942 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:43.942 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:43.942 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:43.942 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:43.942 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:43.942 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:43.942 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:44.200 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:44.200 "name": "Existed_Raid", 00:10:44.200 "uuid": "6b0b6669-4225-11ef-aa83-81fbc7dfef58", 00:10:44.200 "strip_size_kb": 64, 00:10:44.200 "state": "online", 00:10:44.200 "raid_level": "concat", 00:10:44.200 "superblock": false, 00:10:44.200 "num_base_bdevs": 3, 00:10:44.200 "num_base_bdevs_discovered": 3, 00:10:44.200 "num_base_bdevs_operational": 3, 00:10:44.200 "base_bdevs_list": [ 00:10:44.200 { 00:10:44.200 "name": "NewBaseBdev", 00:10:44.200 "uuid": "67ab5df6-4225-11ef-aa83-81fbc7dfef58", 00:10:44.200 "is_configured": true, 00:10:44.200 "data_offset": 0, 00:10:44.200 "data_size": 65536 00:10:44.200 }, 00:10:44.200 { 00:10:44.200 "name": "BaseBdev2", 00:10:44.200 "uuid": "65e0fbdc-4225-11ef-aa83-81fbc7dfef58", 00:10:44.200 "is_configured": true, 00:10:44.200 "data_offset": 0, 00:10:44.200 "data_size": 65536 00:10:44.200 }, 00:10:44.200 { 00:10:44.200 "name": "BaseBdev3", 00:10:44.200 "uuid": "66416b86-4225-11ef-aa83-81fbc7dfef58", 00:10:44.200 "is_configured": true, 00:10:44.200 "data_offset": 0, 00:10:44.200 "data_size": 65536 00:10:44.200 } 00:10:44.200 ] 00:10:44.200 }' 00:10:44.200 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:44.200 21:09:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:44.458 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:10:44.458 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:44.458 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:44.458 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:44.458 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:44.458 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:10:44.458 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:44.458 21:09:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:44.716 [2024-07-14 21:09:56.118603] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:44.716 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:44.716 "name": "Existed_Raid", 00:10:44.716 "aliases": [ 00:10:44.716 "6b0b6669-4225-11ef-aa83-81fbc7dfef58" 00:10:44.716 ], 00:10:44.716 "product_name": "Raid Volume", 00:10:44.716 "block_size": 512, 00:10:44.716 "num_blocks": 196608, 00:10:44.716 "uuid": "6b0b6669-4225-11ef-aa83-81fbc7dfef58", 00:10:44.716 "assigned_rate_limits": { 00:10:44.716 "rw_ios_per_sec": 0, 00:10:44.716 "rw_mbytes_per_sec": 0, 00:10:44.716 "r_mbytes_per_sec": 0, 00:10:44.716 "w_mbytes_per_sec": 0 00:10:44.716 }, 00:10:44.716 "claimed": false, 00:10:44.716 "zoned": false, 00:10:44.716 "supported_io_types": { 00:10:44.716 "read": true, 00:10:44.716 "write": true, 00:10:44.716 "unmap": true, 00:10:44.716 "flush": true, 00:10:44.716 "reset": true, 00:10:44.716 "nvme_admin": false, 00:10:44.716 "nvme_io": false, 00:10:44.716 "nvme_io_md": false, 00:10:44.716 "write_zeroes": true, 00:10:44.716 "zcopy": false, 00:10:44.716 "get_zone_info": false, 00:10:44.716 "zone_management": false, 00:10:44.716 "zone_append": false, 00:10:44.716 "compare": false, 00:10:44.716 "compare_and_write": false, 00:10:44.716 "abort": false, 00:10:44.716 "seek_hole": false, 00:10:44.716 "seek_data": false, 00:10:44.716 "copy": false, 00:10:44.716 "nvme_iov_md": false 00:10:44.716 }, 00:10:44.716 "memory_domains": [ 00:10:44.716 { 00:10:44.716 "dma_device_id": "system", 00:10:44.716 "dma_device_type": 1 00:10:44.716 }, 00:10:44.716 { 00:10:44.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.716 "dma_device_type": 2 00:10:44.716 }, 00:10:44.716 { 00:10:44.716 "dma_device_id": "system", 00:10:44.716 "dma_device_type": 1 00:10:44.716 }, 00:10:44.716 { 00:10:44.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.716 "dma_device_type": 2 00:10:44.716 }, 00:10:44.716 { 00:10:44.716 "dma_device_id": "system", 00:10:44.716 "dma_device_type": 1 00:10:44.716 }, 00:10:44.716 { 00:10:44.716 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.716 "dma_device_type": 2 00:10:44.716 } 00:10:44.716 ], 00:10:44.716 "driver_specific": { 00:10:44.716 "raid": { 00:10:44.716 "uuid": "6b0b6669-4225-11ef-aa83-81fbc7dfef58", 00:10:44.716 "strip_size_kb": 64, 00:10:44.716 "state": "online", 00:10:44.716 "raid_level": "concat", 00:10:44.716 "superblock": false, 00:10:44.716 "num_base_bdevs": 3, 00:10:44.716 "num_base_bdevs_discovered": 3, 00:10:44.716 "num_base_bdevs_operational": 3, 00:10:44.716 "base_bdevs_list": [ 00:10:44.716 { 00:10:44.716 "name": "NewBaseBdev", 00:10:44.716 "uuid": "67ab5df6-4225-11ef-aa83-81fbc7dfef58", 00:10:44.716 "is_configured": true, 00:10:44.716 "data_offset": 0, 00:10:44.716 "data_size": 65536 00:10:44.716 }, 00:10:44.716 { 00:10:44.716 "name": "BaseBdev2", 00:10:44.716 "uuid": "65e0fbdc-4225-11ef-aa83-81fbc7dfef58", 00:10:44.716 "is_configured": true, 00:10:44.716 "data_offset": 0, 00:10:44.716 "data_size": 65536 00:10:44.716 }, 00:10:44.716 { 00:10:44.716 "name": "BaseBdev3", 00:10:44.716 "uuid": "66416b86-4225-11ef-aa83-81fbc7dfef58", 00:10:44.716 "is_configured": true, 00:10:44.716 "data_offset": 0, 00:10:44.716 "data_size": 65536 00:10:44.716 } 00:10:44.716 ] 00:10:44.716 } 00:10:44.716 } 00:10:44.716 }' 00:10:44.716 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:44.716 21:09:56 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:10:44.716 BaseBdev2 00:10:44.716 BaseBdev3' 00:10:44.716 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:44.716 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:10:44.716 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:44.974 "name": "NewBaseBdev", 00:10:44.974 "aliases": [ 00:10:44.974 "67ab5df6-4225-11ef-aa83-81fbc7dfef58" 00:10:44.974 ], 00:10:44.974 "product_name": "Malloc disk", 00:10:44.974 "block_size": 512, 00:10:44.974 "num_blocks": 65536, 00:10:44.974 "uuid": "67ab5df6-4225-11ef-aa83-81fbc7dfef58", 00:10:44.974 "assigned_rate_limits": { 00:10:44.974 "rw_ios_per_sec": 0, 00:10:44.974 "rw_mbytes_per_sec": 0, 00:10:44.974 "r_mbytes_per_sec": 0, 00:10:44.974 "w_mbytes_per_sec": 0 00:10:44.974 }, 00:10:44.974 "claimed": true, 00:10:44.974 "claim_type": "exclusive_write", 00:10:44.974 "zoned": false, 00:10:44.974 "supported_io_types": { 00:10:44.974 "read": true, 00:10:44.974 "write": true, 00:10:44.974 "unmap": true, 00:10:44.974 "flush": true, 00:10:44.974 "reset": true, 00:10:44.974 "nvme_admin": false, 00:10:44.974 "nvme_io": false, 00:10:44.974 "nvme_io_md": false, 00:10:44.974 "write_zeroes": true, 00:10:44.974 "zcopy": true, 00:10:44.974 "get_zone_info": false, 00:10:44.974 "zone_management": false, 00:10:44.974 "zone_append": false, 00:10:44.974 "compare": false, 00:10:44.974 "compare_and_write": false, 00:10:44.974 "abort": true, 00:10:44.974 "seek_hole": false, 00:10:44.974 "seek_data": false, 00:10:44.974 "copy": true, 00:10:44.974 "nvme_iov_md": false 00:10:44.974 }, 00:10:44.974 "memory_domains": [ 00:10:44.974 { 00:10:44.974 "dma_device_id": "system", 00:10:44.974 "dma_device_type": 1 00:10:44.974 }, 00:10:44.974 { 00:10:44.974 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:44.974 "dma_device_type": 2 00:10:44.974 } 00:10:44.974 ], 00:10:44.974 "driver_specific": {} 00:10:44.974 }' 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for 
name in $base_bdev_names 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:44.974 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:45.232 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:45.232 "name": "BaseBdev2", 00:10:45.232 "aliases": [ 00:10:45.232 "65e0fbdc-4225-11ef-aa83-81fbc7dfef58" 00:10:45.232 ], 00:10:45.232 "product_name": "Malloc disk", 00:10:45.232 "block_size": 512, 00:10:45.232 "num_blocks": 65536, 00:10:45.232 "uuid": "65e0fbdc-4225-11ef-aa83-81fbc7dfef58", 00:10:45.232 "assigned_rate_limits": { 00:10:45.232 "rw_ios_per_sec": 0, 00:10:45.232 "rw_mbytes_per_sec": 0, 00:10:45.232 "r_mbytes_per_sec": 0, 00:10:45.232 "w_mbytes_per_sec": 0 00:10:45.232 }, 00:10:45.232 "claimed": true, 00:10:45.232 "claim_type": "exclusive_write", 00:10:45.232 "zoned": false, 00:10:45.232 "supported_io_types": { 00:10:45.232 "read": true, 00:10:45.232 "write": true, 00:10:45.232 "unmap": true, 00:10:45.232 "flush": true, 00:10:45.232 "reset": true, 00:10:45.232 "nvme_admin": false, 00:10:45.232 "nvme_io": false, 00:10:45.232 "nvme_io_md": false, 00:10:45.232 "write_zeroes": true, 00:10:45.232 "zcopy": true, 00:10:45.232 "get_zone_info": false, 00:10:45.232 "zone_management": false, 00:10:45.232 "zone_append": false, 00:10:45.232 "compare": false, 00:10:45.232 "compare_and_write": false, 00:10:45.232 "abort": true, 00:10:45.232 "seek_hole": false, 00:10:45.232 "seek_data": false, 00:10:45.232 "copy": true, 00:10:45.232 "nvme_iov_md": false 00:10:45.232 }, 00:10:45.232 "memory_domains": [ 00:10:45.232 { 00:10:45.232 "dma_device_id": "system", 00:10:45.232 "dma_device_type": 1 00:10:45.232 }, 00:10:45.232 { 00:10:45.232 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.232 "dma_device_type": 2 00:10:45.232 } 00:10:45.232 ], 00:10:45.232 "driver_specific": {} 00:10:45.232 }' 00:10:45.232 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:45.232 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:45.232 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:45.232 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:45.490 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:45.490 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:45.490 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:45.490 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:45.490 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:45.490 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:45.490 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:45.490 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:45.490 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:45.490 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
BaseBdev3 00:10:45.490 21:09:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:45.748 "name": "BaseBdev3", 00:10:45.748 "aliases": [ 00:10:45.748 "66416b86-4225-11ef-aa83-81fbc7dfef58" 00:10:45.748 ], 00:10:45.748 "product_name": "Malloc disk", 00:10:45.748 "block_size": 512, 00:10:45.748 "num_blocks": 65536, 00:10:45.748 "uuid": "66416b86-4225-11ef-aa83-81fbc7dfef58", 00:10:45.748 "assigned_rate_limits": { 00:10:45.748 "rw_ios_per_sec": 0, 00:10:45.748 "rw_mbytes_per_sec": 0, 00:10:45.748 "r_mbytes_per_sec": 0, 00:10:45.748 "w_mbytes_per_sec": 0 00:10:45.748 }, 00:10:45.748 "claimed": true, 00:10:45.748 "claim_type": "exclusive_write", 00:10:45.748 "zoned": false, 00:10:45.748 "supported_io_types": { 00:10:45.748 "read": true, 00:10:45.748 "write": true, 00:10:45.748 "unmap": true, 00:10:45.748 "flush": true, 00:10:45.748 "reset": true, 00:10:45.748 "nvme_admin": false, 00:10:45.748 "nvme_io": false, 00:10:45.748 "nvme_io_md": false, 00:10:45.748 "write_zeroes": true, 00:10:45.748 "zcopy": true, 00:10:45.748 "get_zone_info": false, 00:10:45.748 "zone_management": false, 00:10:45.748 "zone_append": false, 00:10:45.748 "compare": false, 00:10:45.748 "compare_and_write": false, 00:10:45.748 "abort": true, 00:10:45.748 "seek_hole": false, 00:10:45.748 "seek_data": false, 00:10:45.748 "copy": true, 00:10:45.748 "nvme_iov_md": false 00:10:45.748 }, 00:10:45.748 "memory_domains": [ 00:10:45.748 { 00:10:45.748 "dma_device_id": "system", 00:10:45.748 "dma_device_type": 1 00:10:45.748 }, 00:10:45.748 { 00:10:45.748 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:45.748 "dma_device_type": 2 00:10:45.748 } 00:10:45.748 ], 00:10:45.748 "driver_specific": {} 00:10:45.748 }' 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:45.748 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:46.006 [2024-07-14 21:09:57.366642] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:46.006 [2024-07-14 21:09:57.366658] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:46.006 [2024-07-14 21:09:57.366708] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:46.006 [2024-07-14 21:09:57.366737] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:10:46.006 [2024-07-14 21:09:57.366740] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xfb487a34a00 name Existed_Raid, state offline 00:10:46.006 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 53987 00:10:46.006 21:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 53987 ']' 00:10:46.006 21:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 53987 00:10:46.006 21:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:10:46.006 21:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:10:46.006 21:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:10:46.006 21:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 53987 00:10:46.006 21:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:10:46.006 21:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:10:46.006 killing process with pid 53987 00:10:46.007 21:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 53987' 00:10:46.007 21:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 53987 00:10:46.007 [2024-07-14 21:09:57.392481] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:10:46.007 21:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 53987 00:10:46.007 [2024-07-14 21:09:57.410125] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:10:46.265 00:10:46.265 real 0m22.020s 00:10:46.265 user 0m39.994s 00:10:46.265 sys 0m3.219s 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:46.265 ************************************ 00:10:46.265 END TEST raid_state_function_test 00:10:46.265 ************************************ 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:10:46.265 21:09:57 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:10:46.265 21:09:57 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 3 true 00:10:46.265 21:09:57 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:10:46.265 21:09:57 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:46.265 21:09:57 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:10:46.265 ************************************ 00:10:46.265 START TEST raid_state_function_test_sb 00:10:46.265 ************************************ 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 3 true 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:10:46.265 21:09:57 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=54708 00:10:46.265 Process raid pid: 54708 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 54708' 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 54708 /var/tmp/spdk-raid.sock 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 54708 ']' 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:10:46.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:46.265 21:09:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:46.265 [2024-07-14 21:09:57.649380] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:10:46.265 [2024-07-14 21:09:57.649657] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:10:46.830 EAL: TSC is not safe to use in SMP mode 00:10:46.830 EAL: TSC is not invariant 00:10:46.830 [2024-07-14 21:09:58.163005] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:46.830 [2024-07-14 21:09:58.249083] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:10:46.830 [2024-07-14 21:09:58.251436] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:46.830 [2024-07-14 21:09:58.252298] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:46.830 [2024-07-14 21:09:58.252312] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:47.396 [2024-07-14 21:09:58.856777] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:47.396 [2024-07-14 21:09:58.856829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:47.396 [2024-07-14 21:09:58.856833] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:47.396 [2024-07-14 21:09:58.856858] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:47.396 [2024-07-14 21:09:58.856861] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:47.396 [2024-07-14 21:09:58.856867] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:47.396 
21:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:47.396 21:09:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:47.654 21:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:47.654 "name": "Existed_Raid", 00:10:47.654 "uuid": "6d6892a8-4225-11ef-aa83-81fbc7dfef58", 00:10:47.654 "strip_size_kb": 64, 00:10:47.654 "state": "configuring", 00:10:47.654 "raid_level": "concat", 00:10:47.654 "superblock": true, 00:10:47.654 "num_base_bdevs": 3, 00:10:47.654 "num_base_bdevs_discovered": 0, 00:10:47.654 "num_base_bdevs_operational": 3, 00:10:47.654 "base_bdevs_list": [ 00:10:47.654 { 00:10:47.654 "name": "BaseBdev1", 00:10:47.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.654 "is_configured": false, 00:10:47.654 "data_offset": 0, 00:10:47.654 "data_size": 0 00:10:47.654 }, 00:10:47.654 { 00:10:47.654 "name": "BaseBdev2", 00:10:47.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.654 "is_configured": false, 00:10:47.654 "data_offset": 0, 00:10:47.654 "data_size": 0 00:10:47.654 }, 00:10:47.654 { 00:10:47.654 "name": "BaseBdev3", 00:10:47.654 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:47.654 "is_configured": false, 00:10:47.654 "data_offset": 0, 00:10:47.654 "data_size": 0 00:10:47.654 } 00:10:47.654 ] 00:10:47.654 }' 00:10:47.654 21:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:47.654 21:09:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:47.911 21:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:48.169 [2024-07-14 21:09:59.632796] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:48.169 [2024-07-14 21:09:59.632812] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x904d3034500 name Existed_Raid, state configuring 00:10:48.169 21:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:48.427 [2024-07-14 21:09:59.892822] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:48.427 [2024-07-14 21:09:59.892874] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:48.427 [2024-07-14 21:09:59.892878] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:48.427 [2024-07-14 21:09:59.892902] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:48.427 [2024-07-14 21:09:59.892905] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:48.427 [2024-07-14 
21:09:59.892911] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:48.427 21:09:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:10:48.685 [2024-07-14 21:10:00.093726] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:48.685 BaseBdev1 00:10:48.685 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:10:48.685 21:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:10:48.685 21:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:48.685 21:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:48.685 21:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:48.685 21:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:48.685 21:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:48.943 21:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:10:49.203 [ 00:10:49.204 { 00:10:49.204 "name": "BaseBdev1", 00:10:49.204 "aliases": [ 00:10:49.204 "6e252e01-4225-11ef-aa83-81fbc7dfef58" 00:10:49.204 ], 00:10:49.204 "product_name": "Malloc disk", 00:10:49.204 "block_size": 512, 00:10:49.204 "num_blocks": 65536, 00:10:49.204 "uuid": "6e252e01-4225-11ef-aa83-81fbc7dfef58", 00:10:49.204 "assigned_rate_limits": { 00:10:49.204 "rw_ios_per_sec": 0, 00:10:49.204 "rw_mbytes_per_sec": 0, 00:10:49.204 "r_mbytes_per_sec": 0, 00:10:49.204 "w_mbytes_per_sec": 0 00:10:49.204 }, 00:10:49.204 "claimed": true, 00:10:49.204 "claim_type": "exclusive_write", 00:10:49.204 "zoned": false, 00:10:49.204 "supported_io_types": { 00:10:49.204 "read": true, 00:10:49.204 "write": true, 00:10:49.204 "unmap": true, 00:10:49.204 "flush": true, 00:10:49.204 "reset": true, 00:10:49.204 "nvme_admin": false, 00:10:49.204 "nvme_io": false, 00:10:49.204 "nvme_io_md": false, 00:10:49.204 "write_zeroes": true, 00:10:49.204 "zcopy": true, 00:10:49.204 "get_zone_info": false, 00:10:49.204 "zone_management": false, 00:10:49.204 "zone_append": false, 00:10:49.204 "compare": false, 00:10:49.204 "compare_and_write": false, 00:10:49.204 "abort": true, 00:10:49.204 "seek_hole": false, 00:10:49.204 "seek_data": false, 00:10:49.204 "copy": true, 00:10:49.204 "nvme_iov_md": false 00:10:49.204 }, 00:10:49.204 "memory_domains": [ 00:10:49.204 { 00:10:49.204 "dma_device_id": "system", 00:10:49.204 "dma_device_type": 1 00:10:49.204 }, 00:10:49.204 { 00:10:49.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:49.204 "dma_device_type": 2 00:10:49.204 } 00:10:49.204 ], 00:10:49.204 "driver_specific": {} 00:10:49.204 } 00:10:49.204 ] 00:10:49.204 21:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:49.204 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:49.204 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:10:49.204 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:49.204 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:49.204 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:49.204 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:49.204 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:49.204 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:49.204 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:49.204 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:49.204 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:49.204 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:49.462 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:49.462 "name": "Existed_Raid", 00:10:49.462 "uuid": "6e06a937-4225-11ef-aa83-81fbc7dfef58", 00:10:49.462 "strip_size_kb": 64, 00:10:49.462 "state": "configuring", 00:10:49.462 "raid_level": "concat", 00:10:49.462 "superblock": true, 00:10:49.462 "num_base_bdevs": 3, 00:10:49.462 "num_base_bdevs_discovered": 1, 00:10:49.462 "num_base_bdevs_operational": 3, 00:10:49.462 "base_bdevs_list": [ 00:10:49.462 { 00:10:49.462 "name": "BaseBdev1", 00:10:49.462 "uuid": "6e252e01-4225-11ef-aa83-81fbc7dfef58", 00:10:49.462 "is_configured": true, 00:10:49.462 "data_offset": 2048, 00:10:49.462 "data_size": 63488 00:10:49.462 }, 00:10:49.462 { 00:10:49.462 "name": "BaseBdev2", 00:10:49.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.462 "is_configured": false, 00:10:49.462 "data_offset": 0, 00:10:49.462 "data_size": 0 00:10:49.462 }, 00:10:49.462 { 00:10:49.462 "name": "BaseBdev3", 00:10:49.462 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:49.462 "is_configured": false, 00:10:49.462 "data_offset": 0, 00:10:49.463 "data_size": 0 00:10:49.463 } 00:10:49.463 ] 00:10:49.463 }' 00:10:49.463 21:10:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:49.463 21:10:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:49.721 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:10:49.979 [2024-07-14 21:10:01.420846] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:10:49.979 [2024-07-14 21:10:01.420889] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x904d3034500 name Existed_Raid, state configuring 00:10:49.979 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:50.238 [2024-07-14 21:10:01.680873] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:10:50.238 [2024-07-14 
21:10:01.681774] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:10:50.238 [2024-07-14 21:10:01.681836] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:10:50.238 [2024-07-14 21:10:01.681840] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:10:50.238 [2024-07-14 21:10:01.681864] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:50.238 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:50.496 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:50.496 "name": "Existed_Raid", 00:10:50.496 "uuid": "6f177ed4-4225-11ef-aa83-81fbc7dfef58", 00:10:50.496 "strip_size_kb": 64, 00:10:50.496 "state": "configuring", 00:10:50.496 "raid_level": "concat", 00:10:50.496 "superblock": true, 00:10:50.496 "num_base_bdevs": 3, 00:10:50.496 "num_base_bdevs_discovered": 1, 00:10:50.496 "num_base_bdevs_operational": 3, 00:10:50.496 "base_bdevs_list": [ 00:10:50.496 { 00:10:50.496 "name": "BaseBdev1", 00:10:50.496 "uuid": "6e252e01-4225-11ef-aa83-81fbc7dfef58", 00:10:50.496 "is_configured": true, 00:10:50.496 "data_offset": 2048, 00:10:50.496 "data_size": 63488 00:10:50.496 }, 00:10:50.496 { 00:10:50.496 "name": "BaseBdev2", 00:10:50.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.496 "is_configured": false, 00:10:50.496 "data_offset": 0, 00:10:50.496 "data_size": 0 00:10:50.496 }, 00:10:50.496 { 00:10:50.496 "name": "BaseBdev3", 00:10:50.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:50.496 "is_configured": false, 00:10:50.496 "data_offset": 0, 00:10:50.496 "data_size": 0 00:10:50.496 } 00:10:50.496 ] 00:10:50.496 }' 00:10:50.496 21:10:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:50.496 
21:10:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:50.754 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:51.013 [2024-07-14 21:10:02.477056] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:51.013 BaseBdev2 00:10:51.013 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:10:51.013 21:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:51.013 21:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:51.013 21:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:51.013 21:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:51.013 21:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:51.013 21:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:51.271 21:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:51.529 [ 00:10:51.529 { 00:10:51.529 "name": "BaseBdev2", 00:10:51.529 "aliases": [ 00:10:51.529 "6f90f67a-4225-11ef-aa83-81fbc7dfef58" 00:10:51.529 ], 00:10:51.529 "product_name": "Malloc disk", 00:10:51.529 "block_size": 512, 00:10:51.529 "num_blocks": 65536, 00:10:51.529 "uuid": "6f90f67a-4225-11ef-aa83-81fbc7dfef58", 00:10:51.529 "assigned_rate_limits": { 00:10:51.529 "rw_ios_per_sec": 0, 00:10:51.529 "rw_mbytes_per_sec": 0, 00:10:51.529 "r_mbytes_per_sec": 0, 00:10:51.529 "w_mbytes_per_sec": 0 00:10:51.529 }, 00:10:51.529 "claimed": true, 00:10:51.529 "claim_type": "exclusive_write", 00:10:51.529 "zoned": false, 00:10:51.529 "supported_io_types": { 00:10:51.529 "read": true, 00:10:51.529 "write": true, 00:10:51.529 "unmap": true, 00:10:51.529 "flush": true, 00:10:51.529 "reset": true, 00:10:51.529 "nvme_admin": false, 00:10:51.529 "nvme_io": false, 00:10:51.529 "nvme_io_md": false, 00:10:51.529 "write_zeroes": true, 00:10:51.529 "zcopy": true, 00:10:51.529 "get_zone_info": false, 00:10:51.529 "zone_management": false, 00:10:51.529 "zone_append": false, 00:10:51.529 "compare": false, 00:10:51.529 "compare_and_write": false, 00:10:51.529 "abort": true, 00:10:51.529 "seek_hole": false, 00:10:51.529 "seek_data": false, 00:10:51.529 "copy": true, 00:10:51.529 "nvme_iov_md": false 00:10:51.529 }, 00:10:51.529 "memory_domains": [ 00:10:51.529 { 00:10:51.529 "dma_device_id": "system", 00:10:51.529 "dma_device_type": 1 00:10:51.529 }, 00:10:51.529 { 00:10:51.529 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:51.529 "dma_device_type": 2 00:10:51.529 } 00:10:51.529 ], 00:10:51.529 "driver_specific": {} 00:10:51.529 } 00:10:51.529 ] 00:10:51.529 21:10:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:51.529 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:51.529 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:10:51.529 21:10:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:51.529 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:51.529 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:51.530 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:51.530 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:51.530 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:51.530 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:51.530 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:51.530 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:51.530 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:51.530 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:51.530 21:10:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:51.788 21:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:51.788 "name": "Existed_Raid", 00:10:51.788 "uuid": "6f177ed4-4225-11ef-aa83-81fbc7dfef58", 00:10:51.788 "strip_size_kb": 64, 00:10:51.788 "state": "configuring", 00:10:51.788 "raid_level": "concat", 00:10:51.788 "superblock": true, 00:10:51.788 "num_base_bdevs": 3, 00:10:51.788 "num_base_bdevs_discovered": 2, 00:10:51.788 "num_base_bdevs_operational": 3, 00:10:51.788 "base_bdevs_list": [ 00:10:51.788 { 00:10:51.788 "name": "BaseBdev1", 00:10:51.788 "uuid": "6e252e01-4225-11ef-aa83-81fbc7dfef58", 00:10:51.788 "is_configured": true, 00:10:51.788 "data_offset": 2048, 00:10:51.788 "data_size": 63488 00:10:51.788 }, 00:10:51.788 { 00:10:51.788 "name": "BaseBdev2", 00:10:51.788 "uuid": "6f90f67a-4225-11ef-aa83-81fbc7dfef58", 00:10:51.788 "is_configured": true, 00:10:51.788 "data_offset": 2048, 00:10:51.788 "data_size": 63488 00:10:51.788 }, 00:10:51.788 { 00:10:51.788 "name": "BaseBdev3", 00:10:51.788 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:51.788 "is_configured": false, 00:10:51.788 "data_offset": 0, 00:10:51.788 "data_size": 0 00:10:51.788 } 00:10:51.788 ] 00:10:51.788 }' 00:10:51.788 21:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:51.788 21:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:52.045 21:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:52.302 [2024-07-14 21:10:03.741138] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:52.302 [2024-07-14 21:10:03.741231] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x904d3034a00 00:10:52.302 [2024-07-14 21:10:03.741238] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:10:52.302 [2024-07-14 21:10:03.741270] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x904d3097e20 00:10:52.302 [2024-07-14 21:10:03.741333] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x904d3034a00 00:10:52.302 [2024-07-14 21:10:03.741338] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x904d3034a00 00:10:52.302 [2024-07-14 21:10:03.741360] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:10:52.302 BaseBdev3 00:10:52.302 21:10:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:10:52.302 21:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:52.302 21:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:52.302 21:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:52.302 21:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:52.302 21:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:52.302 21:10:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:52.560 21:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:52.818 [ 00:10:52.818 { 00:10:52.818 "name": "BaseBdev3", 00:10:52.818 "aliases": [ 00:10:52.818 "7051d7dc-4225-11ef-aa83-81fbc7dfef58" 00:10:52.818 ], 00:10:52.818 "product_name": "Malloc disk", 00:10:52.818 "block_size": 512, 00:10:52.818 "num_blocks": 65536, 00:10:52.818 "uuid": "7051d7dc-4225-11ef-aa83-81fbc7dfef58", 00:10:52.818 "assigned_rate_limits": { 00:10:52.818 "rw_ios_per_sec": 0, 00:10:52.818 "rw_mbytes_per_sec": 0, 00:10:52.818 "r_mbytes_per_sec": 0, 00:10:52.818 "w_mbytes_per_sec": 0 00:10:52.818 }, 00:10:52.818 "claimed": true, 00:10:52.818 "claim_type": "exclusive_write", 00:10:52.818 "zoned": false, 00:10:52.818 "supported_io_types": { 00:10:52.818 "read": true, 00:10:52.818 "write": true, 00:10:52.818 "unmap": true, 00:10:52.818 "flush": true, 00:10:52.818 "reset": true, 00:10:52.818 "nvme_admin": false, 00:10:52.818 "nvme_io": false, 00:10:52.818 "nvme_io_md": false, 00:10:52.818 "write_zeroes": true, 00:10:52.818 "zcopy": true, 00:10:52.818 "get_zone_info": false, 00:10:52.818 "zone_management": false, 00:10:52.818 "zone_append": false, 00:10:52.818 "compare": false, 00:10:52.818 "compare_and_write": false, 00:10:52.818 "abort": true, 00:10:52.818 "seek_hole": false, 00:10:52.818 "seek_data": false, 00:10:52.818 "copy": true, 00:10:52.818 "nvme_iov_md": false 00:10:52.818 }, 00:10:52.818 "memory_domains": [ 00:10:52.818 { 00:10:52.818 "dma_device_id": "system", 00:10:52.818 "dma_device_type": 1 00:10:52.818 }, 00:10:52.818 { 00:10:52.818 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:52.818 "dma_device_type": 2 00:10:52.818 } 00:10:52.818 ], 00:10:52.818 "driver_specific": {} 00:10:52.818 } 00:10:52.818 ] 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs 
)) 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:52.818 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:53.075 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:53.075 "name": "Existed_Raid", 00:10:53.075 "uuid": "6f177ed4-4225-11ef-aa83-81fbc7dfef58", 00:10:53.075 "strip_size_kb": 64, 00:10:53.075 "state": "online", 00:10:53.075 "raid_level": "concat", 00:10:53.075 "superblock": true, 00:10:53.075 "num_base_bdevs": 3, 00:10:53.075 "num_base_bdevs_discovered": 3, 00:10:53.075 "num_base_bdevs_operational": 3, 00:10:53.075 "base_bdevs_list": [ 00:10:53.075 { 00:10:53.076 "name": "BaseBdev1", 00:10:53.076 "uuid": "6e252e01-4225-11ef-aa83-81fbc7dfef58", 00:10:53.076 "is_configured": true, 00:10:53.076 "data_offset": 2048, 00:10:53.076 "data_size": 63488 00:10:53.076 }, 00:10:53.076 { 00:10:53.076 "name": "BaseBdev2", 00:10:53.076 "uuid": "6f90f67a-4225-11ef-aa83-81fbc7dfef58", 00:10:53.076 "is_configured": true, 00:10:53.076 "data_offset": 2048, 00:10:53.076 "data_size": 63488 00:10:53.076 }, 00:10:53.076 { 00:10:53.076 "name": "BaseBdev3", 00:10:53.076 "uuid": "7051d7dc-4225-11ef-aa83-81fbc7dfef58", 00:10:53.076 "is_configured": true, 00:10:53.076 "data_offset": 2048, 00:10:53.076 "data_size": 63488 00:10:53.076 } 00:10:53.076 ] 00:10:53.076 }' 00:10:53.076 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:53.076 21:10:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:53.333 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:10:53.333 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:10:53.333 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:10:53.333 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:10:53.333 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:10:53.333 21:10:04 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:10:53.333 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:10:53.333 21:10:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:10:53.591 [2024-07-14 21:10:05.129023] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:10:53.849 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:10:53.849 "name": "Existed_Raid", 00:10:53.849 "aliases": [ 00:10:53.849 "6f177ed4-4225-11ef-aa83-81fbc7dfef58" 00:10:53.849 ], 00:10:53.849 "product_name": "Raid Volume", 00:10:53.849 "block_size": 512, 00:10:53.849 "num_blocks": 190464, 00:10:53.849 "uuid": "6f177ed4-4225-11ef-aa83-81fbc7dfef58", 00:10:53.849 "assigned_rate_limits": { 00:10:53.849 "rw_ios_per_sec": 0, 00:10:53.849 "rw_mbytes_per_sec": 0, 00:10:53.849 "r_mbytes_per_sec": 0, 00:10:53.849 "w_mbytes_per_sec": 0 00:10:53.849 }, 00:10:53.849 "claimed": false, 00:10:53.849 "zoned": false, 00:10:53.849 "supported_io_types": { 00:10:53.849 "read": true, 00:10:53.849 "write": true, 00:10:53.849 "unmap": true, 00:10:53.849 "flush": true, 00:10:53.849 "reset": true, 00:10:53.849 "nvme_admin": false, 00:10:53.849 "nvme_io": false, 00:10:53.849 "nvme_io_md": false, 00:10:53.849 "write_zeroes": true, 00:10:53.849 "zcopy": false, 00:10:53.849 "get_zone_info": false, 00:10:53.849 "zone_management": false, 00:10:53.849 "zone_append": false, 00:10:53.849 "compare": false, 00:10:53.849 "compare_and_write": false, 00:10:53.849 "abort": false, 00:10:53.849 "seek_hole": false, 00:10:53.849 "seek_data": false, 00:10:53.849 "copy": false, 00:10:53.849 "nvme_iov_md": false 00:10:53.849 }, 00:10:53.849 "memory_domains": [ 00:10:53.849 { 00:10:53.849 "dma_device_id": "system", 00:10:53.849 "dma_device_type": 1 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.849 "dma_device_type": 2 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "dma_device_id": "system", 00:10:53.849 "dma_device_type": 1 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.849 "dma_device_type": 2 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "dma_device_id": "system", 00:10:53.849 "dma_device_type": 1 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:53.849 "dma_device_type": 2 00:10:53.849 } 00:10:53.849 ], 00:10:53.849 "driver_specific": { 00:10:53.849 "raid": { 00:10:53.849 "uuid": "6f177ed4-4225-11ef-aa83-81fbc7dfef58", 00:10:53.849 "strip_size_kb": 64, 00:10:53.849 "state": "online", 00:10:53.849 "raid_level": "concat", 00:10:53.849 "superblock": true, 00:10:53.849 "num_base_bdevs": 3, 00:10:53.849 "num_base_bdevs_discovered": 3, 00:10:53.849 "num_base_bdevs_operational": 3, 00:10:53.849 "base_bdevs_list": [ 00:10:53.849 { 00:10:53.849 "name": "BaseBdev1", 00:10:53.849 "uuid": "6e252e01-4225-11ef-aa83-81fbc7dfef58", 00:10:53.849 "is_configured": true, 00:10:53.849 "data_offset": 2048, 00:10:53.849 "data_size": 63488 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "name": "BaseBdev2", 00:10:53.849 "uuid": "6f90f67a-4225-11ef-aa83-81fbc7dfef58", 00:10:53.849 "is_configured": true, 00:10:53.849 "data_offset": 2048, 00:10:53.849 "data_size": 63488 00:10:53.849 }, 00:10:53.849 { 00:10:53.849 "name": "BaseBdev3", 00:10:53.849 "uuid": 
"7051d7dc-4225-11ef-aa83-81fbc7dfef58", 00:10:53.849 "is_configured": true, 00:10:53.849 "data_offset": 2048, 00:10:53.849 "data_size": 63488 00:10:53.849 } 00:10:53.849 ] 00:10:53.849 } 00:10:53.849 } 00:10:53.849 }' 00:10:53.849 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:10:53.849 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:10:53.849 BaseBdev2 00:10:53.849 BaseBdev3' 00:10:53.849 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:53.849 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:10:53.849 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:54.107 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:54.107 "name": "BaseBdev1", 00:10:54.107 "aliases": [ 00:10:54.107 "6e252e01-4225-11ef-aa83-81fbc7dfef58" 00:10:54.107 ], 00:10:54.107 "product_name": "Malloc disk", 00:10:54.107 "block_size": 512, 00:10:54.107 "num_blocks": 65536, 00:10:54.107 "uuid": "6e252e01-4225-11ef-aa83-81fbc7dfef58", 00:10:54.107 "assigned_rate_limits": { 00:10:54.107 "rw_ios_per_sec": 0, 00:10:54.107 "rw_mbytes_per_sec": 0, 00:10:54.107 "r_mbytes_per_sec": 0, 00:10:54.107 "w_mbytes_per_sec": 0 00:10:54.107 }, 00:10:54.107 "claimed": true, 00:10:54.107 "claim_type": "exclusive_write", 00:10:54.107 "zoned": false, 00:10:54.107 "supported_io_types": { 00:10:54.107 "read": true, 00:10:54.107 "write": true, 00:10:54.107 "unmap": true, 00:10:54.107 "flush": true, 00:10:54.107 "reset": true, 00:10:54.107 "nvme_admin": false, 00:10:54.107 "nvme_io": false, 00:10:54.107 "nvme_io_md": false, 00:10:54.107 "write_zeroes": true, 00:10:54.107 "zcopy": true, 00:10:54.107 "get_zone_info": false, 00:10:54.107 "zone_management": false, 00:10:54.107 "zone_append": false, 00:10:54.107 "compare": false, 00:10:54.107 "compare_and_write": false, 00:10:54.107 "abort": true, 00:10:54.107 "seek_hole": false, 00:10:54.107 "seek_data": false, 00:10:54.107 "copy": true, 00:10:54.107 "nvme_iov_md": false 00:10:54.107 }, 00:10:54.107 "memory_domains": [ 00:10:54.107 { 00:10:54.107 "dma_device_id": "system", 00:10:54.107 "dma_device_type": 1 00:10:54.107 }, 00:10:54.107 { 00:10:54.107 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.107 "dma_device_type": 2 00:10:54.107 } 00:10:54.107 ], 00:10:54.107 "driver_specific": {} 00:10:54.107 }' 00:10:54.107 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:54.107 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:54.107 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:54.107 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:54.107 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:54.107 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:54.107 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:54.108 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:54.108 
21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:54.108 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:54.108 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:54.108 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:54.108 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:54.108 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:10:54.108 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:54.366 "name": "BaseBdev2", 00:10:54.366 "aliases": [ 00:10:54.366 "6f90f67a-4225-11ef-aa83-81fbc7dfef58" 00:10:54.366 ], 00:10:54.366 "product_name": "Malloc disk", 00:10:54.366 "block_size": 512, 00:10:54.366 "num_blocks": 65536, 00:10:54.366 "uuid": "6f90f67a-4225-11ef-aa83-81fbc7dfef58", 00:10:54.366 "assigned_rate_limits": { 00:10:54.366 "rw_ios_per_sec": 0, 00:10:54.366 "rw_mbytes_per_sec": 0, 00:10:54.366 "r_mbytes_per_sec": 0, 00:10:54.366 "w_mbytes_per_sec": 0 00:10:54.366 }, 00:10:54.366 "claimed": true, 00:10:54.366 "claim_type": "exclusive_write", 00:10:54.366 "zoned": false, 00:10:54.366 "supported_io_types": { 00:10:54.366 "read": true, 00:10:54.366 "write": true, 00:10:54.366 "unmap": true, 00:10:54.366 "flush": true, 00:10:54.366 "reset": true, 00:10:54.366 "nvme_admin": false, 00:10:54.366 "nvme_io": false, 00:10:54.366 "nvme_io_md": false, 00:10:54.366 "write_zeroes": true, 00:10:54.366 "zcopy": true, 00:10:54.366 "get_zone_info": false, 00:10:54.366 "zone_management": false, 00:10:54.366 "zone_append": false, 00:10:54.366 "compare": false, 00:10:54.366 "compare_and_write": false, 00:10:54.366 "abort": true, 00:10:54.366 "seek_hole": false, 00:10:54.366 "seek_data": false, 00:10:54.366 "copy": true, 00:10:54.366 "nvme_iov_md": false 00:10:54.366 }, 00:10:54.366 "memory_domains": [ 00:10:54.366 { 00:10:54.366 "dma_device_id": "system", 00:10:54.366 "dma_device_type": 1 00:10:54.366 }, 00:10:54.366 { 00:10:54.366 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.366 "dma_device_type": 2 00:10:54.366 } 00:10:54.366 ], 00:10:54.366 "driver_specific": {} 00:10:54.366 }' 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:54.366 21:10:05 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:10:54.366 21:10:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:10:54.624 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:10:54.624 "name": "BaseBdev3", 00:10:54.624 "aliases": [ 00:10:54.624 "7051d7dc-4225-11ef-aa83-81fbc7dfef58" 00:10:54.624 ], 00:10:54.624 "product_name": "Malloc disk", 00:10:54.624 "block_size": 512, 00:10:54.624 "num_blocks": 65536, 00:10:54.624 "uuid": "7051d7dc-4225-11ef-aa83-81fbc7dfef58", 00:10:54.624 "assigned_rate_limits": { 00:10:54.624 "rw_ios_per_sec": 0, 00:10:54.624 "rw_mbytes_per_sec": 0, 00:10:54.624 "r_mbytes_per_sec": 0, 00:10:54.624 "w_mbytes_per_sec": 0 00:10:54.624 }, 00:10:54.624 "claimed": true, 00:10:54.624 "claim_type": "exclusive_write", 00:10:54.624 "zoned": false, 00:10:54.624 "supported_io_types": { 00:10:54.624 "read": true, 00:10:54.624 "write": true, 00:10:54.624 "unmap": true, 00:10:54.624 "flush": true, 00:10:54.624 "reset": true, 00:10:54.624 "nvme_admin": false, 00:10:54.624 "nvme_io": false, 00:10:54.624 "nvme_io_md": false, 00:10:54.624 "write_zeroes": true, 00:10:54.624 "zcopy": true, 00:10:54.625 "get_zone_info": false, 00:10:54.625 "zone_management": false, 00:10:54.625 "zone_append": false, 00:10:54.625 "compare": false, 00:10:54.625 "compare_and_write": false, 00:10:54.625 "abort": true, 00:10:54.625 "seek_hole": false, 00:10:54.625 "seek_data": false, 00:10:54.625 "copy": true, 00:10:54.625 "nvme_iov_md": false 00:10:54.625 }, 00:10:54.625 "memory_domains": [ 00:10:54.625 { 00:10:54.625 "dma_device_id": "system", 00:10:54.625 "dma_device_type": 1 00:10:54.625 }, 00:10:54.625 { 00:10:54.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.625 "dma_device_type": 2 00:10:54.625 } 00:10:54.625 ], 00:10:54.625 "driver_specific": {} 00:10:54.625 }' 00:10:54.625 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:54.625 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:10:54.625 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:10:54.625 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:54.625 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:10:54.625 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:10:54.625 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:54.625 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:10:54.625 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:10:54.625 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:54.625 21:10:06 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:10:54.625 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:10:54.625 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:10:54.883 [2024-07-14 21:10:06.389033] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:10:54.883 [2024-07-14 21:10:06.389052] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:10:54.883 [2024-07-14 21:10:06.389080] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 2 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:54.883 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:55.141 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:55.141 "name": "Existed_Raid", 00:10:55.141 "uuid": "6f177ed4-4225-11ef-aa83-81fbc7dfef58", 00:10:55.141 "strip_size_kb": 64, 00:10:55.141 "state": "offline", 00:10:55.141 "raid_level": "concat", 00:10:55.141 "superblock": true, 00:10:55.141 "num_base_bdevs": 3, 00:10:55.141 "num_base_bdevs_discovered": 2, 00:10:55.141 "num_base_bdevs_operational": 2, 00:10:55.141 "base_bdevs_list": [ 00:10:55.141 { 00:10:55.141 "name": null, 00:10:55.141 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:55.141 "is_configured": false, 00:10:55.141 "data_offset": 2048, 00:10:55.141 "data_size": 63488 00:10:55.141 }, 00:10:55.141 { 00:10:55.141 "name": "BaseBdev2", 00:10:55.141 "uuid": 
"6f90f67a-4225-11ef-aa83-81fbc7dfef58", 00:10:55.141 "is_configured": true, 00:10:55.141 "data_offset": 2048, 00:10:55.141 "data_size": 63488 00:10:55.141 }, 00:10:55.141 { 00:10:55.141 "name": "BaseBdev3", 00:10:55.141 "uuid": "7051d7dc-4225-11ef-aa83-81fbc7dfef58", 00:10:55.141 "is_configured": true, 00:10:55.141 "data_offset": 2048, 00:10:55.141 "data_size": 63488 00:10:55.141 } 00:10:55.141 ] 00:10:55.141 }' 00:10:55.141 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:55.141 21:10:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:55.707 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:10:55.707 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:55.707 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.707 21:10:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:55.707 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:55.707 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:55.708 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:10:55.965 [2024-07-14 21:10:07.443240] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:55.965 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:55.965 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:55.965 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:55.965 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:10:56.223 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:10:56.223 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:10:56.223 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:10:56.481 [2024-07-14 21:10:07.917358] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:10:56.481 [2024-07-14 21:10:07.917421] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x904d3034a00 name Existed_Raid, state offline 00:10:56.481 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:10:56.481 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:10:56.481 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:56.481 21:10:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:10:56.739 21:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:10:56.739 21:10:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:10:56.739 21:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:10:56.739 21:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:10:56.739 21:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:56.739 21:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:10:56.997 BaseBdev2 00:10:56.997 21:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:10:56.997 21:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:10:56.997 21:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:56.997 21:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:56.997 21:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:56.997 21:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:56.997 21:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:57.255 21:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:10:57.513 [ 00:10:57.513 { 00:10:57.513 "name": "BaseBdev2", 00:10:57.513 "aliases": [ 00:10:57.513 "73195db3-4225-11ef-aa83-81fbc7dfef58" 00:10:57.513 ], 00:10:57.513 "product_name": "Malloc disk", 00:10:57.513 "block_size": 512, 00:10:57.513 "num_blocks": 65536, 00:10:57.513 "uuid": "73195db3-4225-11ef-aa83-81fbc7dfef58", 00:10:57.513 "assigned_rate_limits": { 00:10:57.513 "rw_ios_per_sec": 0, 00:10:57.513 "rw_mbytes_per_sec": 0, 00:10:57.513 "r_mbytes_per_sec": 0, 00:10:57.513 "w_mbytes_per_sec": 0 00:10:57.513 }, 00:10:57.513 "claimed": false, 00:10:57.513 "zoned": false, 00:10:57.513 "supported_io_types": { 00:10:57.513 "read": true, 00:10:57.513 "write": true, 00:10:57.513 "unmap": true, 00:10:57.513 "flush": true, 00:10:57.513 "reset": true, 00:10:57.513 "nvme_admin": false, 00:10:57.513 "nvme_io": false, 00:10:57.513 "nvme_io_md": false, 00:10:57.513 "write_zeroes": true, 00:10:57.513 "zcopy": true, 00:10:57.513 "get_zone_info": false, 00:10:57.513 "zone_management": false, 00:10:57.513 "zone_append": false, 00:10:57.513 "compare": false, 00:10:57.513 "compare_and_write": false, 00:10:57.513 "abort": true, 00:10:57.513 "seek_hole": false, 00:10:57.513 "seek_data": false, 00:10:57.513 "copy": true, 00:10:57.513 "nvme_iov_md": false 00:10:57.513 }, 00:10:57.513 "memory_domains": [ 00:10:57.513 { 00:10:57.513 "dma_device_id": "system", 00:10:57.513 "dma_device_type": 1 00:10:57.513 }, 00:10:57.513 { 00:10:57.513 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:57.513 "dma_device_type": 2 00:10:57.513 } 00:10:57.513 ], 00:10:57.513 "driver_specific": {} 00:10:57.513 } 00:10:57.513 ] 00:10:57.513 21:10:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:57.513 21:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:57.513 21:10:08 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:57.513 21:10:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:10:57.771 BaseBdev3 00:10:57.771 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:10:57.771 21:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:10:57.771 21:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:10:57.771 21:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:10:57.771 21:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:10:57.771 21:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:10:57.771 21:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:10:58.029 21:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:10:58.287 [ 00:10:58.287 { 00:10:58.287 "name": "BaseBdev3", 00:10:58.287 "aliases": [ 00:10:58.287 "738df22a-4225-11ef-aa83-81fbc7dfef58" 00:10:58.287 ], 00:10:58.287 "product_name": "Malloc disk", 00:10:58.287 "block_size": 512, 00:10:58.287 "num_blocks": 65536, 00:10:58.287 "uuid": "738df22a-4225-11ef-aa83-81fbc7dfef58", 00:10:58.287 "assigned_rate_limits": { 00:10:58.287 "rw_ios_per_sec": 0, 00:10:58.287 "rw_mbytes_per_sec": 0, 00:10:58.287 "r_mbytes_per_sec": 0, 00:10:58.287 "w_mbytes_per_sec": 0 00:10:58.287 }, 00:10:58.287 "claimed": false, 00:10:58.287 "zoned": false, 00:10:58.287 "supported_io_types": { 00:10:58.287 "read": true, 00:10:58.287 "write": true, 00:10:58.287 "unmap": true, 00:10:58.287 "flush": true, 00:10:58.287 "reset": true, 00:10:58.287 "nvme_admin": false, 00:10:58.287 "nvme_io": false, 00:10:58.287 "nvme_io_md": false, 00:10:58.287 "write_zeroes": true, 00:10:58.287 "zcopy": true, 00:10:58.287 "get_zone_info": false, 00:10:58.287 "zone_management": false, 00:10:58.287 "zone_append": false, 00:10:58.287 "compare": false, 00:10:58.287 "compare_and_write": false, 00:10:58.287 "abort": true, 00:10:58.287 "seek_hole": false, 00:10:58.287 "seek_data": false, 00:10:58.287 "copy": true, 00:10:58.287 "nvme_iov_md": false 00:10:58.287 }, 00:10:58.287 "memory_domains": [ 00:10:58.287 { 00:10:58.287 "dma_device_id": "system", 00:10:58.287 "dma_device_type": 1 00:10:58.287 }, 00:10:58.287 { 00:10:58.287 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:58.287 "dma_device_type": 2 00:10:58.287 } 00:10:58.287 ], 00:10:58.287 "driver_specific": {} 00:10:58.287 } 00:10:58.287 ] 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 
BaseBdev2 BaseBdev3' -n Existed_Raid 00:10:58.287 [2024-07-14 21:10:09.800037] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:10:58.287 [2024-07-14 21:10:09.800131] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:10:58.287 [2024-07-14 21:10:09.800156] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:10:58.287 [2024-07-14 21:10:09.800778] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:58.287 21:10:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:58.545 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:58.545 "name": "Existed_Raid", 00:10:58.545 "uuid": "73ee61c2-4225-11ef-aa83-81fbc7dfef58", 00:10:58.545 "strip_size_kb": 64, 00:10:58.545 "state": "configuring", 00:10:58.545 "raid_level": "concat", 00:10:58.545 "superblock": true, 00:10:58.545 "num_base_bdevs": 3, 00:10:58.545 "num_base_bdevs_discovered": 2, 00:10:58.545 "num_base_bdevs_operational": 3, 00:10:58.545 "base_bdevs_list": [ 00:10:58.545 { 00:10:58.545 "name": "BaseBdev1", 00:10:58.545 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:58.545 "is_configured": false, 00:10:58.545 "data_offset": 0, 00:10:58.545 "data_size": 0 00:10:58.545 }, 00:10:58.545 { 00:10:58.545 "name": "BaseBdev2", 00:10:58.545 "uuid": "73195db3-4225-11ef-aa83-81fbc7dfef58", 00:10:58.545 "is_configured": true, 00:10:58.545 "data_offset": 2048, 00:10:58.545 "data_size": 63488 00:10:58.545 }, 00:10:58.545 { 00:10:58.545 "name": "BaseBdev3", 00:10:58.545 "uuid": "738df22a-4225-11ef-aa83-81fbc7dfef58", 00:10:58.545 "is_configured": true, 00:10:58.545 "data_offset": 2048, 00:10:58.545 "data_size": 63488 00:10:58.545 } 00:10:58.545 ] 00:10:58.545 }' 00:10:58.545 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:58.545 21:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.111 21:10:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:10:59.111 [2024-07-14 21:10:10.604104] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:10:59.111 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:10:59.111 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:10:59.111 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:10:59.111 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:10:59.111 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:10:59.111 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:10:59.111 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:10:59.111 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:10:59.111 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:10:59.111 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:10:59.111 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.111 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:10:59.369 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:10:59.369 "name": "Existed_Raid", 00:10:59.369 "uuid": "73ee61c2-4225-11ef-aa83-81fbc7dfef58", 00:10:59.369 "strip_size_kb": 64, 00:10:59.369 "state": "configuring", 00:10:59.369 "raid_level": "concat", 00:10:59.369 "superblock": true, 00:10:59.369 "num_base_bdevs": 3, 00:10:59.369 "num_base_bdevs_discovered": 1, 00:10:59.369 "num_base_bdevs_operational": 3, 00:10:59.369 "base_bdevs_list": [ 00:10:59.369 { 00:10:59.369 "name": "BaseBdev1", 00:10:59.369 "uuid": "00000000-0000-0000-0000-000000000000", 00:10:59.369 "is_configured": false, 00:10:59.369 "data_offset": 0, 00:10:59.369 "data_size": 0 00:10:59.369 }, 00:10:59.369 { 00:10:59.369 "name": null, 00:10:59.369 "uuid": "73195db3-4225-11ef-aa83-81fbc7dfef58", 00:10:59.369 "is_configured": false, 00:10:59.369 "data_offset": 2048, 00:10:59.369 "data_size": 63488 00:10:59.369 }, 00:10:59.369 { 00:10:59.369 "name": "BaseBdev3", 00:10:59.369 "uuid": "738df22a-4225-11ef-aa83-81fbc7dfef58", 00:10:59.369 "is_configured": true, 00:10:59.369 "data_offset": 2048, 00:10:59.369 "data_size": 63488 00:10:59.369 } 00:10:59.369 ] 00:10:59.369 }' 00:10:59.369 21:10:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:10:59.369 21:10:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:10:59.627 21:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:10:59.627 21:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:10:59.886 21:10:11 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:10:59.886 21:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:00.145 [2024-07-14 21:10:11.620215] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:00.145 BaseBdev1 00:11:00.145 21:10:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:00.145 21:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:00.145 21:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:00.145 21:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:00.145 21:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:00.145 21:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:00.145 21:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:00.403 21:10:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:00.670 [ 00:11:00.670 { 00:11:00.670 "name": "BaseBdev1", 00:11:00.670 "aliases": [ 00:11:00.670 "75041983-4225-11ef-aa83-81fbc7dfef58" 00:11:00.670 ], 00:11:00.670 "product_name": "Malloc disk", 00:11:00.670 "block_size": 512, 00:11:00.670 "num_blocks": 65536, 00:11:00.670 "uuid": "75041983-4225-11ef-aa83-81fbc7dfef58", 00:11:00.670 "assigned_rate_limits": { 00:11:00.670 "rw_ios_per_sec": 0, 00:11:00.670 "rw_mbytes_per_sec": 0, 00:11:00.670 "r_mbytes_per_sec": 0, 00:11:00.670 "w_mbytes_per_sec": 0 00:11:00.670 }, 00:11:00.670 "claimed": true, 00:11:00.670 "claim_type": "exclusive_write", 00:11:00.670 "zoned": false, 00:11:00.670 "supported_io_types": { 00:11:00.670 "read": true, 00:11:00.670 "write": true, 00:11:00.670 "unmap": true, 00:11:00.670 "flush": true, 00:11:00.670 "reset": true, 00:11:00.670 "nvme_admin": false, 00:11:00.670 "nvme_io": false, 00:11:00.670 "nvme_io_md": false, 00:11:00.670 "write_zeroes": true, 00:11:00.670 "zcopy": true, 00:11:00.670 "get_zone_info": false, 00:11:00.670 "zone_management": false, 00:11:00.670 "zone_append": false, 00:11:00.670 "compare": false, 00:11:00.670 "compare_and_write": false, 00:11:00.670 "abort": true, 00:11:00.670 "seek_hole": false, 00:11:00.670 "seek_data": false, 00:11:00.670 "copy": true, 00:11:00.670 "nvme_iov_md": false 00:11:00.670 }, 00:11:00.670 "memory_domains": [ 00:11:00.670 { 00:11:00.670 "dma_device_id": "system", 00:11:00.670 "dma_device_type": 1 00:11:00.670 }, 00:11:00.670 { 00:11:00.670 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:00.670 "dma_device_type": 2 00:11:00.670 } 00:11:00.670 ], 00:11:00.670 "driver_specific": {} 00:11:00.670 } 00:11:00.670 ] 00:11:00.670 21:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:00.670 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:00.670 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:00.670 21:10:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:00.670 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:00.670 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:00.670 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:00.670 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:00.670 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:00.670 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:00.670 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:00.670 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:00.670 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:00.942 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:00.942 "name": "Existed_Raid", 00:11:00.942 "uuid": "73ee61c2-4225-11ef-aa83-81fbc7dfef58", 00:11:00.942 "strip_size_kb": 64, 00:11:00.942 "state": "configuring", 00:11:00.942 "raid_level": "concat", 00:11:00.942 "superblock": true, 00:11:00.942 "num_base_bdevs": 3, 00:11:00.942 "num_base_bdevs_discovered": 2, 00:11:00.942 "num_base_bdevs_operational": 3, 00:11:00.942 "base_bdevs_list": [ 00:11:00.942 { 00:11:00.942 "name": "BaseBdev1", 00:11:00.942 "uuid": "75041983-4225-11ef-aa83-81fbc7dfef58", 00:11:00.942 "is_configured": true, 00:11:00.942 "data_offset": 2048, 00:11:00.942 "data_size": 63488 00:11:00.942 }, 00:11:00.942 { 00:11:00.942 "name": null, 00:11:00.942 "uuid": "73195db3-4225-11ef-aa83-81fbc7dfef58", 00:11:00.942 "is_configured": false, 00:11:00.942 "data_offset": 2048, 00:11:00.942 "data_size": 63488 00:11:00.942 }, 00:11:00.942 { 00:11:00.942 "name": "BaseBdev3", 00:11:00.942 "uuid": "738df22a-4225-11ef-aa83-81fbc7dfef58", 00:11:00.942 "is_configured": true, 00:11:00.942 "data_offset": 2048, 00:11:00.942 "data_size": 63488 00:11:00.942 } 00:11:00.942 ] 00:11:00.942 }' 00:11:00.942 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:00.942 21:10:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:01.200 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:01.200 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:01.459 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:01.459 21:10:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:01.717 [2024-07-14 21:10:13.136167] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:01.717 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:01.717 21:10:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:01.717 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:01.717 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:01.717 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:01.717 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:01.717 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:01.717 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:01.717 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:01.717 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:01.717 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:01.717 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:01.975 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:01.975 "name": "Existed_Raid", 00:11:01.975 "uuid": "73ee61c2-4225-11ef-aa83-81fbc7dfef58", 00:11:01.975 "strip_size_kb": 64, 00:11:01.975 "state": "configuring", 00:11:01.975 "raid_level": "concat", 00:11:01.975 "superblock": true, 00:11:01.975 "num_base_bdevs": 3, 00:11:01.975 "num_base_bdevs_discovered": 1, 00:11:01.975 "num_base_bdevs_operational": 3, 00:11:01.975 "base_bdevs_list": [ 00:11:01.975 { 00:11:01.975 "name": "BaseBdev1", 00:11:01.975 "uuid": "75041983-4225-11ef-aa83-81fbc7dfef58", 00:11:01.975 "is_configured": true, 00:11:01.975 "data_offset": 2048, 00:11:01.975 "data_size": 63488 00:11:01.975 }, 00:11:01.975 { 00:11:01.975 "name": null, 00:11:01.975 "uuid": "73195db3-4225-11ef-aa83-81fbc7dfef58", 00:11:01.975 "is_configured": false, 00:11:01.975 "data_offset": 2048, 00:11:01.975 "data_size": 63488 00:11:01.975 }, 00:11:01.975 { 00:11:01.975 "name": null, 00:11:01.975 "uuid": "738df22a-4225-11ef-aa83-81fbc7dfef58", 00:11:01.975 "is_configured": false, 00:11:01.975 "data_offset": 2048, 00:11:01.975 "data_size": 63488 00:11:01.975 } 00:11:01.975 ] 00:11:01.975 }' 00:11:01.975 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:01.975 21:10:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:02.233 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:02.233 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:02.491 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:02.491 21:10:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:02.750 [2024-07-14 21:10:14.176207] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:02.750 21:10:14 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:02.750 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:02.750 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:02.750 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:02.750 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:02.750 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:02.750 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:02.750 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:02.750 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:02.750 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:02.750 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:02.750 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:03.008 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:03.008 "name": "Existed_Raid", 00:11:03.008 "uuid": "73ee61c2-4225-11ef-aa83-81fbc7dfef58", 00:11:03.008 "strip_size_kb": 64, 00:11:03.008 "state": "configuring", 00:11:03.008 "raid_level": "concat", 00:11:03.008 "superblock": true, 00:11:03.008 "num_base_bdevs": 3, 00:11:03.008 "num_base_bdevs_discovered": 2, 00:11:03.008 "num_base_bdevs_operational": 3, 00:11:03.008 "base_bdevs_list": [ 00:11:03.008 { 00:11:03.008 "name": "BaseBdev1", 00:11:03.008 "uuid": "75041983-4225-11ef-aa83-81fbc7dfef58", 00:11:03.008 "is_configured": true, 00:11:03.008 "data_offset": 2048, 00:11:03.008 "data_size": 63488 00:11:03.008 }, 00:11:03.008 { 00:11:03.008 "name": null, 00:11:03.008 "uuid": "73195db3-4225-11ef-aa83-81fbc7dfef58", 00:11:03.008 "is_configured": false, 00:11:03.008 "data_offset": 2048, 00:11:03.008 "data_size": 63488 00:11:03.008 }, 00:11:03.008 { 00:11:03.008 "name": "BaseBdev3", 00:11:03.008 "uuid": "738df22a-4225-11ef-aa83-81fbc7dfef58", 00:11:03.008 "is_configured": true, 00:11:03.008 "data_offset": 2048, 00:11:03.008 "data_size": 63488 00:11:03.008 } 00:11:03.008 ] 00:11:03.008 }' 00:11:03.008 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:03.008 21:10:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:03.267 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:03.267 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.525 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:03.525 21:10:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:03.784 
[2024-07-14 21:10:15.140214] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:03.784 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:03.784 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:03.784 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:03.784 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:03.784 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:03.784 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:03.784 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:03.784 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:03.784 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:03.784 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:03.784 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:03.784 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:04.043 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:04.043 "name": "Existed_Raid", 00:11:04.043 "uuid": "73ee61c2-4225-11ef-aa83-81fbc7dfef58", 00:11:04.043 "strip_size_kb": 64, 00:11:04.043 "state": "configuring", 00:11:04.043 "raid_level": "concat", 00:11:04.043 "superblock": true, 00:11:04.043 "num_base_bdevs": 3, 00:11:04.043 "num_base_bdevs_discovered": 1, 00:11:04.043 "num_base_bdevs_operational": 3, 00:11:04.043 "base_bdevs_list": [ 00:11:04.043 { 00:11:04.043 "name": null, 00:11:04.043 "uuid": "75041983-4225-11ef-aa83-81fbc7dfef58", 00:11:04.043 "is_configured": false, 00:11:04.043 "data_offset": 2048, 00:11:04.043 "data_size": 63488 00:11:04.043 }, 00:11:04.043 { 00:11:04.043 "name": null, 00:11:04.043 "uuid": "73195db3-4225-11ef-aa83-81fbc7dfef58", 00:11:04.043 "is_configured": false, 00:11:04.043 "data_offset": 2048, 00:11:04.043 "data_size": 63488 00:11:04.043 }, 00:11:04.043 { 00:11:04.043 "name": "BaseBdev3", 00:11:04.043 "uuid": "738df22a-4225-11ef-aa83-81fbc7dfef58", 00:11:04.043 "is_configured": true, 00:11:04.043 "data_offset": 2048, 00:11:04.043 "data_size": 63488 00:11:04.043 } 00:11:04.043 ] 00:11:04.043 }' 00:11:04.043 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:04.043 21:10:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:04.302 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.302 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:04.560 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:04.560 21:10:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:04.818 [2024-07-14 21:10:16.202329] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:04.818 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 3 00:11:04.818 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:04.818 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:04.818 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:04.818 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:04.818 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:04.818 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:04.819 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:04.819 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:04.819 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:04.819 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:04.819 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:05.077 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:05.077 "name": "Existed_Raid", 00:11:05.077 "uuid": "73ee61c2-4225-11ef-aa83-81fbc7dfef58", 00:11:05.077 "strip_size_kb": 64, 00:11:05.077 "state": "configuring", 00:11:05.077 "raid_level": "concat", 00:11:05.077 "superblock": true, 00:11:05.077 "num_base_bdevs": 3, 00:11:05.077 "num_base_bdevs_discovered": 2, 00:11:05.077 "num_base_bdevs_operational": 3, 00:11:05.077 "base_bdevs_list": [ 00:11:05.077 { 00:11:05.077 "name": null, 00:11:05.077 "uuid": "75041983-4225-11ef-aa83-81fbc7dfef58", 00:11:05.077 "is_configured": false, 00:11:05.077 "data_offset": 2048, 00:11:05.077 "data_size": 63488 00:11:05.077 }, 00:11:05.077 { 00:11:05.077 "name": "BaseBdev2", 00:11:05.077 "uuid": "73195db3-4225-11ef-aa83-81fbc7dfef58", 00:11:05.077 "is_configured": true, 00:11:05.077 "data_offset": 2048, 00:11:05.077 "data_size": 63488 00:11:05.077 }, 00:11:05.077 { 00:11:05.077 "name": "BaseBdev3", 00:11:05.077 "uuid": "738df22a-4225-11ef-aa83-81fbc7dfef58", 00:11:05.077 "is_configured": true, 00:11:05.077 "data_offset": 2048, 00:11:05.077 "data_size": 63488 00:11:05.077 } 00:11:05.077 ] 00:11:05.077 }' 00:11:05.077 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:05.077 21:10:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:05.335 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.335 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:05.593 21:10:16 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:05.593 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:05.593 21:10:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:05.852 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 75041983-4225-11ef-aa83-81fbc7dfef58 00:11:05.852 [2024-07-14 21:10:17.390495] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:05.852 [2024-07-14 21:10:17.390556] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x904d3034a00 00:11:05.852 [2024-07-14 21:10:17.390561] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:05.852 [2024-07-14 21:10:17.390580] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x904d3097e20 00:11:05.852 [2024-07-14 21:10:17.390625] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x904d3034a00 00:11:05.852 [2024-07-14 21:10:17.390629] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x904d3034a00 00:11:05.852 [2024-07-14 21:10:17.390647] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:05.852 NewBaseBdev 00:11:06.110 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:06.110 21:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:11:06.110 21:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:06.110 21:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:06.110 21:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:06.110 21:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:06.110 21:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:06.110 21:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:06.368 [ 00:11:06.368 { 00:11:06.368 "name": "NewBaseBdev", 00:11:06.368 "aliases": [ 00:11:06.368 "75041983-4225-11ef-aa83-81fbc7dfef58" 00:11:06.368 ], 00:11:06.368 "product_name": "Malloc disk", 00:11:06.368 "block_size": 512, 00:11:06.368 "num_blocks": 65536, 00:11:06.368 "uuid": "75041983-4225-11ef-aa83-81fbc7dfef58", 00:11:06.368 "assigned_rate_limits": { 00:11:06.368 "rw_ios_per_sec": 0, 00:11:06.368 "rw_mbytes_per_sec": 0, 00:11:06.368 "r_mbytes_per_sec": 0, 00:11:06.368 "w_mbytes_per_sec": 0 00:11:06.368 }, 00:11:06.368 "claimed": true, 00:11:06.368 "claim_type": "exclusive_write", 00:11:06.368 "zoned": false, 00:11:06.368 "supported_io_types": { 00:11:06.368 "read": true, 00:11:06.368 "write": true, 00:11:06.368 "unmap": true, 00:11:06.368 "flush": true, 00:11:06.368 "reset": true, 00:11:06.368 "nvme_admin": false, 00:11:06.368 "nvme_io": false, 00:11:06.368 "nvme_io_md": false, 00:11:06.368 
"write_zeroes": true, 00:11:06.368 "zcopy": true, 00:11:06.368 "get_zone_info": false, 00:11:06.368 "zone_management": false, 00:11:06.368 "zone_append": false, 00:11:06.369 "compare": false, 00:11:06.369 "compare_and_write": false, 00:11:06.369 "abort": true, 00:11:06.369 "seek_hole": false, 00:11:06.369 "seek_data": false, 00:11:06.369 "copy": true, 00:11:06.369 "nvme_iov_md": false 00:11:06.369 }, 00:11:06.369 "memory_domains": [ 00:11:06.369 { 00:11:06.369 "dma_device_id": "system", 00:11:06.369 "dma_device_type": 1 00:11:06.369 }, 00:11:06.369 { 00:11:06.369 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:06.369 "dma_device_type": 2 00:11:06.369 } 00:11:06.369 ], 00:11:06.369 "driver_specific": {} 00:11:06.369 } 00:11:06.369 ] 00:11:06.369 21:10:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:06.369 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 3 00:11:06.369 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:06.369 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:06.369 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:06.369 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:06.369 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:06.369 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:06.369 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:06.369 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:06.369 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:06.369 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:06.369 21:10:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:06.628 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:06.628 "name": "Existed_Raid", 00:11:06.628 "uuid": "73ee61c2-4225-11ef-aa83-81fbc7dfef58", 00:11:06.628 "strip_size_kb": 64, 00:11:06.628 "state": "online", 00:11:06.628 "raid_level": "concat", 00:11:06.628 "superblock": true, 00:11:06.628 "num_base_bdevs": 3, 00:11:06.628 "num_base_bdevs_discovered": 3, 00:11:06.628 "num_base_bdevs_operational": 3, 00:11:06.628 "base_bdevs_list": [ 00:11:06.628 { 00:11:06.628 "name": "NewBaseBdev", 00:11:06.628 "uuid": "75041983-4225-11ef-aa83-81fbc7dfef58", 00:11:06.628 "is_configured": true, 00:11:06.628 "data_offset": 2048, 00:11:06.628 "data_size": 63488 00:11:06.628 }, 00:11:06.628 { 00:11:06.628 "name": "BaseBdev2", 00:11:06.628 "uuid": "73195db3-4225-11ef-aa83-81fbc7dfef58", 00:11:06.628 "is_configured": true, 00:11:06.628 "data_offset": 2048, 00:11:06.628 "data_size": 63488 00:11:06.628 }, 00:11:06.628 { 00:11:06.628 "name": "BaseBdev3", 00:11:06.628 "uuid": "738df22a-4225-11ef-aa83-81fbc7dfef58", 00:11:06.628 "is_configured": true, 00:11:06.628 "data_offset": 2048, 00:11:06.628 "data_size": 63488 00:11:06.628 } 00:11:06.628 ] 
00:11:06.628 }' 00:11:06.628 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:06.628 21:10:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:06.887 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:06.887 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:06.887 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:06.887 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:06.887 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:06.887 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:11:06.887 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:06.887 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:07.146 [2024-07-14 21:10:18.662443] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:07.146 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:07.146 "name": "Existed_Raid", 00:11:07.146 "aliases": [ 00:11:07.146 "73ee61c2-4225-11ef-aa83-81fbc7dfef58" 00:11:07.146 ], 00:11:07.146 "product_name": "Raid Volume", 00:11:07.146 "block_size": 512, 00:11:07.146 "num_blocks": 190464, 00:11:07.146 "uuid": "73ee61c2-4225-11ef-aa83-81fbc7dfef58", 00:11:07.146 "assigned_rate_limits": { 00:11:07.146 "rw_ios_per_sec": 0, 00:11:07.146 "rw_mbytes_per_sec": 0, 00:11:07.146 "r_mbytes_per_sec": 0, 00:11:07.146 "w_mbytes_per_sec": 0 00:11:07.146 }, 00:11:07.146 "claimed": false, 00:11:07.146 "zoned": false, 00:11:07.146 "supported_io_types": { 00:11:07.146 "read": true, 00:11:07.146 "write": true, 00:11:07.146 "unmap": true, 00:11:07.146 "flush": true, 00:11:07.146 "reset": true, 00:11:07.146 "nvme_admin": false, 00:11:07.146 "nvme_io": false, 00:11:07.146 "nvme_io_md": false, 00:11:07.146 "write_zeroes": true, 00:11:07.146 "zcopy": false, 00:11:07.146 "get_zone_info": false, 00:11:07.146 "zone_management": false, 00:11:07.146 "zone_append": false, 00:11:07.146 "compare": false, 00:11:07.146 "compare_and_write": false, 00:11:07.146 "abort": false, 00:11:07.146 "seek_hole": false, 00:11:07.146 "seek_data": false, 00:11:07.146 "copy": false, 00:11:07.146 "nvme_iov_md": false 00:11:07.146 }, 00:11:07.146 "memory_domains": [ 00:11:07.146 { 00:11:07.146 "dma_device_id": "system", 00:11:07.146 "dma_device_type": 1 00:11:07.146 }, 00:11:07.146 { 00:11:07.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.146 "dma_device_type": 2 00:11:07.146 }, 00:11:07.146 { 00:11:07.146 "dma_device_id": "system", 00:11:07.146 "dma_device_type": 1 00:11:07.146 }, 00:11:07.146 { 00:11:07.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.146 "dma_device_type": 2 00:11:07.146 }, 00:11:07.146 { 00:11:07.146 "dma_device_id": "system", 00:11:07.146 "dma_device_type": 1 00:11:07.146 }, 00:11:07.146 { 00:11:07.146 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.146 "dma_device_type": 2 00:11:07.146 } 00:11:07.146 ], 00:11:07.146 "driver_specific": { 00:11:07.146 "raid": { 00:11:07.146 "uuid": "73ee61c2-4225-11ef-aa83-81fbc7dfef58", 00:11:07.146 
"strip_size_kb": 64, 00:11:07.146 "state": "online", 00:11:07.146 "raid_level": "concat", 00:11:07.146 "superblock": true, 00:11:07.146 "num_base_bdevs": 3, 00:11:07.146 "num_base_bdevs_discovered": 3, 00:11:07.146 "num_base_bdevs_operational": 3, 00:11:07.146 "base_bdevs_list": [ 00:11:07.146 { 00:11:07.146 "name": "NewBaseBdev", 00:11:07.146 "uuid": "75041983-4225-11ef-aa83-81fbc7dfef58", 00:11:07.146 "is_configured": true, 00:11:07.146 "data_offset": 2048, 00:11:07.146 "data_size": 63488 00:11:07.146 }, 00:11:07.146 { 00:11:07.146 "name": "BaseBdev2", 00:11:07.146 "uuid": "73195db3-4225-11ef-aa83-81fbc7dfef58", 00:11:07.146 "is_configured": true, 00:11:07.146 "data_offset": 2048, 00:11:07.146 "data_size": 63488 00:11:07.146 }, 00:11:07.146 { 00:11:07.146 "name": "BaseBdev3", 00:11:07.146 "uuid": "738df22a-4225-11ef-aa83-81fbc7dfef58", 00:11:07.146 "is_configured": true, 00:11:07.146 "data_offset": 2048, 00:11:07.146 "data_size": 63488 00:11:07.146 } 00:11:07.146 ] 00:11:07.146 } 00:11:07.146 } 00:11:07.146 }' 00:11:07.146 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:07.146 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:07.146 BaseBdev2 00:11:07.146 BaseBdev3' 00:11:07.146 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:07.146 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:07.146 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:07.405 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:07.405 "name": "NewBaseBdev", 00:11:07.405 "aliases": [ 00:11:07.405 "75041983-4225-11ef-aa83-81fbc7dfef58" 00:11:07.405 ], 00:11:07.405 "product_name": "Malloc disk", 00:11:07.405 "block_size": 512, 00:11:07.405 "num_blocks": 65536, 00:11:07.405 "uuid": "75041983-4225-11ef-aa83-81fbc7dfef58", 00:11:07.405 "assigned_rate_limits": { 00:11:07.405 "rw_ios_per_sec": 0, 00:11:07.405 "rw_mbytes_per_sec": 0, 00:11:07.405 "r_mbytes_per_sec": 0, 00:11:07.405 "w_mbytes_per_sec": 0 00:11:07.405 }, 00:11:07.405 "claimed": true, 00:11:07.405 "claim_type": "exclusive_write", 00:11:07.405 "zoned": false, 00:11:07.405 "supported_io_types": { 00:11:07.405 "read": true, 00:11:07.405 "write": true, 00:11:07.405 "unmap": true, 00:11:07.405 "flush": true, 00:11:07.405 "reset": true, 00:11:07.405 "nvme_admin": false, 00:11:07.405 "nvme_io": false, 00:11:07.405 "nvme_io_md": false, 00:11:07.405 "write_zeroes": true, 00:11:07.405 "zcopy": true, 00:11:07.405 "get_zone_info": false, 00:11:07.405 "zone_management": false, 00:11:07.405 "zone_append": false, 00:11:07.405 "compare": false, 00:11:07.405 "compare_and_write": false, 00:11:07.405 "abort": true, 00:11:07.405 "seek_hole": false, 00:11:07.405 "seek_data": false, 00:11:07.405 "copy": true, 00:11:07.405 "nvme_iov_md": false 00:11:07.405 }, 00:11:07.405 "memory_domains": [ 00:11:07.405 { 00:11:07.405 "dma_device_id": "system", 00:11:07.405 "dma_device_type": 1 00:11:07.405 }, 00:11:07.405 { 00:11:07.405 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.405 "dma_device_type": 2 00:11:07.405 } 00:11:07.405 ], 00:11:07.405 "driver_specific": {} 00:11:07.405 }' 00:11:07.405 21:10:18 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:07.665 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:07.665 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:07.665 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:07.665 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:07.665 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:07.665 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:07.665 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:07.665 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:07.665 21:10:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:07.665 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:07.665 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:07.665 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:07.665 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:07.665 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:07.923 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:07.923 "name": "BaseBdev2", 00:11:07.923 "aliases": [ 00:11:07.923 "73195db3-4225-11ef-aa83-81fbc7dfef58" 00:11:07.923 ], 00:11:07.923 "product_name": "Malloc disk", 00:11:07.923 "block_size": 512, 00:11:07.923 "num_blocks": 65536, 00:11:07.923 "uuid": "73195db3-4225-11ef-aa83-81fbc7dfef58", 00:11:07.923 "assigned_rate_limits": { 00:11:07.923 "rw_ios_per_sec": 0, 00:11:07.923 "rw_mbytes_per_sec": 0, 00:11:07.923 "r_mbytes_per_sec": 0, 00:11:07.924 "w_mbytes_per_sec": 0 00:11:07.924 }, 00:11:07.924 "claimed": true, 00:11:07.924 "claim_type": "exclusive_write", 00:11:07.924 "zoned": false, 00:11:07.924 "supported_io_types": { 00:11:07.924 "read": true, 00:11:07.924 "write": true, 00:11:07.924 "unmap": true, 00:11:07.924 "flush": true, 00:11:07.924 "reset": true, 00:11:07.924 "nvme_admin": false, 00:11:07.924 "nvme_io": false, 00:11:07.924 "nvme_io_md": false, 00:11:07.924 "write_zeroes": true, 00:11:07.924 "zcopy": true, 00:11:07.924 "get_zone_info": false, 00:11:07.924 "zone_management": false, 00:11:07.924 "zone_append": false, 00:11:07.924 "compare": false, 00:11:07.924 "compare_and_write": false, 00:11:07.924 "abort": true, 00:11:07.924 "seek_hole": false, 00:11:07.924 "seek_data": false, 00:11:07.924 "copy": true, 00:11:07.924 "nvme_iov_md": false 00:11:07.924 }, 00:11:07.924 "memory_domains": [ 00:11:07.924 { 00:11:07.924 "dma_device_id": "system", 00:11:07.924 "dma_device_type": 1 00:11:07.924 }, 00:11:07.924 { 00:11:07.924 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:07.924 "dma_device_type": 2 00:11:07.924 } 00:11:07.924 ], 00:11:07.924 "driver_specific": {} 00:11:07.924 }' 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:07.924 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:08.183 "name": "BaseBdev3", 00:11:08.183 "aliases": [ 00:11:08.183 "738df22a-4225-11ef-aa83-81fbc7dfef58" 00:11:08.183 ], 00:11:08.183 "product_name": "Malloc disk", 00:11:08.183 "block_size": 512, 00:11:08.183 "num_blocks": 65536, 00:11:08.183 "uuid": "738df22a-4225-11ef-aa83-81fbc7dfef58", 00:11:08.183 "assigned_rate_limits": { 00:11:08.183 "rw_ios_per_sec": 0, 00:11:08.183 "rw_mbytes_per_sec": 0, 00:11:08.183 "r_mbytes_per_sec": 0, 00:11:08.183 "w_mbytes_per_sec": 0 00:11:08.183 }, 00:11:08.183 "claimed": true, 00:11:08.183 "claim_type": "exclusive_write", 00:11:08.183 "zoned": false, 00:11:08.183 "supported_io_types": { 00:11:08.183 "read": true, 00:11:08.183 "write": true, 00:11:08.183 "unmap": true, 00:11:08.183 "flush": true, 00:11:08.183 "reset": true, 00:11:08.183 "nvme_admin": false, 00:11:08.183 "nvme_io": false, 00:11:08.183 "nvme_io_md": false, 00:11:08.183 "write_zeroes": true, 00:11:08.183 "zcopy": true, 00:11:08.183 "get_zone_info": false, 00:11:08.183 "zone_management": false, 00:11:08.183 "zone_append": false, 00:11:08.183 "compare": false, 00:11:08.183 "compare_and_write": false, 00:11:08.183 "abort": true, 00:11:08.183 "seek_hole": false, 00:11:08.183 "seek_data": false, 00:11:08.183 "copy": true, 00:11:08.183 "nvme_iov_md": false 00:11:08.183 }, 00:11:08.183 "memory_domains": [ 00:11:08.183 { 00:11:08.183 "dma_device_id": "system", 00:11:08.183 "dma_device_type": 1 00:11:08.183 }, 00:11:08.183 { 00:11:08.183 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:08.183 "dma_device_type": 2 00:11:08.183 } 00:11:08.183 ], 00:11:08.183 "driver_specific": {} 00:11:08.183 }' 00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:08.183 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:08.441 [2024-07-14 21:10:19.914428] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:08.441 [2024-07-14 21:10:19.914464] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:08.442 [2024-07-14 21:10:19.914502] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:08.442 [2024-07-14 21:10:19.914531] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:08.442 [2024-07-14 21:10:19.914535] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x904d3034a00 name Existed_Raid, state offline 00:11:08.442 21:10:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 54708 00:11:08.442 21:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 54708 ']' 00:11:08.442 21:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 54708 00:11:08.442 21:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:11:08.442 21:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:08.442 21:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 54708 00:11:08.442 21:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:11:08.442 21:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:08.442 21:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:08.442 killing process with pid 54708 00:11:08.442 21:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 54708' 00:11:08.442 21:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 54708 00:11:08.442 [2024-07-14 21:10:19.941237] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:08.442 21:10:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 54708 00:11:08.442 [2024-07-14 21:10:19.959064] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:08.700 21:10:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # 
return 0 00:11:08.700 00:11:08.700 real 0m22.510s 00:11:08.700 user 0m41.032s 00:11:08.700 sys 0m3.166s 00:11:08.700 21:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:08.700 21:10:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:08.700 ************************************ 00:11:08.700 END TEST raid_state_function_test_sb 00:11:08.700 ************************************ 00:11:08.700 21:10:20 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:08.700 21:10:20 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 3 00:11:08.700 21:10:20 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:08.700 21:10:20 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:08.700 21:10:20 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:08.700 ************************************ 00:11:08.700 START TEST raid_superblock_test 00:11:08.700 ************************************ 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 3 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=55428 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 55428 /var/tmp/spdk-raid.sock 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 55428 ']' 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:08.700 21:10:20 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:08.700 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:08.700 21:10:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:08.700 [2024-07-14 21:10:20.216054] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:08.700 [2024-07-14 21:10:20.216310] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:09.267 EAL: TSC is not safe to use in SMP mode 00:11:09.267 EAL: TSC is not invariant 00:11:09.267 [2024-07-14 21:10:20.721147] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:09.267 [2024-07-14 21:10:20.812254] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:09.267 [2024-07-14 21:10:20.814469] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:09.526 [2024-07-14 21:10:20.815222] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.526 [2024-07-14 21:10:20.815237] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:09.784 21:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:09.784 21:10:21 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:11:09.784 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:11:09.784 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:09.784 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:11:09.784 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:11:09.784 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:11:09.785 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:09.785 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:09.785 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:09.785 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:11:10.043 malloc1 00:11:10.043 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:10.302 [2024-07-14 21:10:21.687922] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:10.302 [2024-07-14 21:10:21.688001] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.302 [2024-07-14 21:10:21.688029] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd582f834780 00:11:10.302 [2024-07-14 21:10:21.688036] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.302 [2024-07-14 21:10:21.688886] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.302 [2024-07-14 21:10:21.688926] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:10.302 pt1 00:11:10.302 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:10.302 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:10.302 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:11:10.302 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:11:10.302 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:11:10.302 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:10.302 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:10.302 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:10.302 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:11:10.561 malloc2 00:11:10.561 21:10:21 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:10.819 [2024-07-14 21:10:22.147931] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:10.819 [2024-07-14 21:10:22.148028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:10.819 [2024-07-14 21:10:22.148055] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd582f834c80 00:11:10.819 [2024-07-14 21:10:22.148063] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:10.819 [2024-07-14 21:10:22.148790] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:10.819 [2024-07-14 21:10:22.148829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:10.819 pt2 00:11:10.819 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:10.819 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:10.819 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:11:10.819 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:11:10.819 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:11:10.819 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:11:10.820 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:11:10.820 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:11:10.820 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:11:11.078 malloc3 00:11:11.078 21:10:22 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:11.078 [2024-07-14 21:10:22.579929] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:11.078 [2024-07-14 21:10:22.580004] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:11.078 [2024-07-14 21:10:22.580031] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd582f835180 00:11:11.078 [2024-07-14 21:10:22.580038] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:11.078 [2024-07-14 21:10:22.580650] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:11.078 [2024-07-14 21:10:22.580673] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:11.078 pt3 00:11:11.078 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:11:11.078 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:11:11.078 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:11:11.337 [2024-07-14 21:10:22.831932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:11.337 [2024-07-14 21:10:22.832450] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:11.337 [2024-07-14 21:10:22.832485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:11:11.337 [2024-07-14 21:10:22.832530] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd582f835400 00:11:11.337 [2024-07-14 21:10:22.832535] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:11.337 [2024-07-14 21:10:22.832561] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd582f897e20 00:11:11.337 [2024-07-14 21:10:22.832626] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd582f835400 00:11:11.337 [2024-07-14 21:10:22.832630] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd582f835400 00:11:11.337 [2024-07-14 21:10:22.832655] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:11.337 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:11.337 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:11.337 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:11.337 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:11.337 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:11.337 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:11.337 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:11.337 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:11.337 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:11.337 21:10:22 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:11:11.337 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:11.337 21:10:22 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:11.596 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:11.596 "name": "raid_bdev1", 00:11:11.596 "uuid": "7bb2e42d-4225-11ef-aa83-81fbc7dfef58", 00:11:11.596 "strip_size_kb": 64, 00:11:11.596 "state": "online", 00:11:11.596 "raid_level": "concat", 00:11:11.596 "superblock": true, 00:11:11.596 "num_base_bdevs": 3, 00:11:11.596 "num_base_bdevs_discovered": 3, 00:11:11.596 "num_base_bdevs_operational": 3, 00:11:11.596 "base_bdevs_list": [ 00:11:11.596 { 00:11:11.596 "name": "pt1", 00:11:11.596 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:11.596 "is_configured": true, 00:11:11.596 "data_offset": 2048, 00:11:11.596 "data_size": 63488 00:11:11.596 }, 00:11:11.596 { 00:11:11.596 "name": "pt2", 00:11:11.596 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:11.596 "is_configured": true, 00:11:11.596 "data_offset": 2048, 00:11:11.596 "data_size": 63488 00:11:11.596 }, 00:11:11.596 { 00:11:11.596 "name": "pt3", 00:11:11.596 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:11.596 "is_configured": true, 00:11:11.596 "data_offset": 2048, 00:11:11.596 "data_size": 63488 00:11:11.596 } 00:11:11.596 ] 00:11:11.596 }' 00:11:11.596 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:11.596 21:10:23 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:12.177 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:11:12.178 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:12.178 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:12.178 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:12.178 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:12.178 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:12.178 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:12.178 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:12.178 [2024-07-14 21:10:23.688058] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:12.178 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:12.178 "name": "raid_bdev1", 00:11:12.178 "aliases": [ 00:11:12.178 "7bb2e42d-4225-11ef-aa83-81fbc7dfef58" 00:11:12.178 ], 00:11:12.178 "product_name": "Raid Volume", 00:11:12.178 "block_size": 512, 00:11:12.178 "num_blocks": 190464, 00:11:12.178 "uuid": "7bb2e42d-4225-11ef-aa83-81fbc7dfef58", 00:11:12.178 "assigned_rate_limits": { 00:11:12.178 "rw_ios_per_sec": 0, 00:11:12.178 "rw_mbytes_per_sec": 0, 00:11:12.178 "r_mbytes_per_sec": 0, 00:11:12.178 "w_mbytes_per_sec": 0 00:11:12.178 }, 00:11:12.178 "claimed": false, 00:11:12.178 "zoned": false, 00:11:12.178 "supported_io_types": { 00:11:12.178 "read": true, 00:11:12.178 "write": true, 00:11:12.178 "unmap": true, 
00:11:12.178 "flush": true, 00:11:12.178 "reset": true, 00:11:12.178 "nvme_admin": false, 00:11:12.178 "nvme_io": false, 00:11:12.178 "nvme_io_md": false, 00:11:12.178 "write_zeroes": true, 00:11:12.178 "zcopy": false, 00:11:12.178 "get_zone_info": false, 00:11:12.178 "zone_management": false, 00:11:12.178 "zone_append": false, 00:11:12.178 "compare": false, 00:11:12.178 "compare_and_write": false, 00:11:12.178 "abort": false, 00:11:12.178 "seek_hole": false, 00:11:12.178 "seek_data": false, 00:11:12.178 "copy": false, 00:11:12.178 "nvme_iov_md": false 00:11:12.178 }, 00:11:12.178 "memory_domains": [ 00:11:12.178 { 00:11:12.178 "dma_device_id": "system", 00:11:12.178 "dma_device_type": 1 00:11:12.178 }, 00:11:12.178 { 00:11:12.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.178 "dma_device_type": 2 00:11:12.178 }, 00:11:12.178 { 00:11:12.178 "dma_device_id": "system", 00:11:12.178 "dma_device_type": 1 00:11:12.178 }, 00:11:12.178 { 00:11:12.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.178 "dma_device_type": 2 00:11:12.178 }, 00:11:12.178 { 00:11:12.178 "dma_device_id": "system", 00:11:12.178 "dma_device_type": 1 00:11:12.178 }, 00:11:12.178 { 00:11:12.178 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.178 "dma_device_type": 2 00:11:12.178 } 00:11:12.178 ], 00:11:12.178 "driver_specific": { 00:11:12.178 "raid": { 00:11:12.178 "uuid": "7bb2e42d-4225-11ef-aa83-81fbc7dfef58", 00:11:12.178 "strip_size_kb": 64, 00:11:12.178 "state": "online", 00:11:12.178 "raid_level": "concat", 00:11:12.178 "superblock": true, 00:11:12.178 "num_base_bdevs": 3, 00:11:12.178 "num_base_bdevs_discovered": 3, 00:11:12.178 "num_base_bdevs_operational": 3, 00:11:12.178 "base_bdevs_list": [ 00:11:12.178 { 00:11:12.178 "name": "pt1", 00:11:12.178 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:12.178 "is_configured": true, 00:11:12.178 "data_offset": 2048, 00:11:12.178 "data_size": 63488 00:11:12.178 }, 00:11:12.178 { 00:11:12.178 "name": "pt2", 00:11:12.178 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:12.178 "is_configured": true, 00:11:12.178 "data_offset": 2048, 00:11:12.178 "data_size": 63488 00:11:12.178 }, 00:11:12.178 { 00:11:12.178 "name": "pt3", 00:11:12.178 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:12.178 "is_configured": true, 00:11:12.178 "data_offset": 2048, 00:11:12.178 "data_size": 63488 00:11:12.178 } 00:11:12.178 ] 00:11:12.178 } 00:11:12.178 } 00:11:12.178 }' 00:11:12.178 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:12.480 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:12.480 pt2 00:11:12.480 pt3' 00:11:12.480 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:12.480 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:12.480 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:12.480 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:12.480 "name": "pt1", 00:11:12.480 "aliases": [ 00:11:12.480 "00000000-0000-0000-0000-000000000001" 00:11:12.480 ], 00:11:12.480 "product_name": "passthru", 00:11:12.480 "block_size": 512, 00:11:12.480 "num_blocks": 65536, 00:11:12.480 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:12.480 "assigned_rate_limits": { 
00:11:12.480 "rw_ios_per_sec": 0, 00:11:12.480 "rw_mbytes_per_sec": 0, 00:11:12.480 "r_mbytes_per_sec": 0, 00:11:12.480 "w_mbytes_per_sec": 0 00:11:12.480 }, 00:11:12.480 "claimed": true, 00:11:12.480 "claim_type": "exclusive_write", 00:11:12.480 "zoned": false, 00:11:12.480 "supported_io_types": { 00:11:12.480 "read": true, 00:11:12.480 "write": true, 00:11:12.480 "unmap": true, 00:11:12.480 "flush": true, 00:11:12.480 "reset": true, 00:11:12.480 "nvme_admin": false, 00:11:12.480 "nvme_io": false, 00:11:12.480 "nvme_io_md": false, 00:11:12.480 "write_zeroes": true, 00:11:12.480 "zcopy": true, 00:11:12.480 "get_zone_info": false, 00:11:12.480 "zone_management": false, 00:11:12.480 "zone_append": false, 00:11:12.480 "compare": false, 00:11:12.480 "compare_and_write": false, 00:11:12.480 "abort": true, 00:11:12.480 "seek_hole": false, 00:11:12.480 "seek_data": false, 00:11:12.480 "copy": true, 00:11:12.480 "nvme_iov_md": false 00:11:12.480 }, 00:11:12.480 "memory_domains": [ 00:11:12.480 { 00:11:12.480 "dma_device_id": "system", 00:11:12.480 "dma_device_type": 1 00:11:12.480 }, 00:11:12.480 { 00:11:12.480 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:12.480 "dma_device_type": 2 00:11:12.480 } 00:11:12.480 ], 00:11:12.480 "driver_specific": { 00:11:12.480 "passthru": { 00:11:12.480 "name": "pt1", 00:11:12.480 "base_bdev_name": "malloc1" 00:11:12.480 } 00:11:12.480 } 00:11:12.480 }' 00:11:12.480 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.480 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:12.480 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:12.480 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.480 21:10:23 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:12.480 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:12.480 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.480 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:12.480 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:12.480 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.745 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:12.745 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:12.745 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:12.745 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:12.745 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:13.003 "name": "pt2", 00:11:13.003 "aliases": [ 00:11:13.003 "00000000-0000-0000-0000-000000000002" 00:11:13.003 ], 00:11:13.003 "product_name": "passthru", 00:11:13.003 "block_size": 512, 00:11:13.003 "num_blocks": 65536, 00:11:13.003 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:13.003 "assigned_rate_limits": { 00:11:13.003 "rw_ios_per_sec": 0, 00:11:13.003 "rw_mbytes_per_sec": 0, 00:11:13.003 "r_mbytes_per_sec": 0, 00:11:13.003 "w_mbytes_per_sec": 0 00:11:13.003 
}, 00:11:13.003 "claimed": true, 00:11:13.003 "claim_type": "exclusive_write", 00:11:13.003 "zoned": false, 00:11:13.003 "supported_io_types": { 00:11:13.003 "read": true, 00:11:13.003 "write": true, 00:11:13.003 "unmap": true, 00:11:13.003 "flush": true, 00:11:13.003 "reset": true, 00:11:13.003 "nvme_admin": false, 00:11:13.003 "nvme_io": false, 00:11:13.003 "nvme_io_md": false, 00:11:13.003 "write_zeroes": true, 00:11:13.003 "zcopy": true, 00:11:13.003 "get_zone_info": false, 00:11:13.003 "zone_management": false, 00:11:13.003 "zone_append": false, 00:11:13.003 "compare": false, 00:11:13.003 "compare_and_write": false, 00:11:13.003 "abort": true, 00:11:13.003 "seek_hole": false, 00:11:13.003 "seek_data": false, 00:11:13.003 "copy": true, 00:11:13.003 "nvme_iov_md": false 00:11:13.003 }, 00:11:13.003 "memory_domains": [ 00:11:13.003 { 00:11:13.003 "dma_device_id": "system", 00:11:13.003 "dma_device_type": 1 00:11:13.003 }, 00:11:13.003 { 00:11:13.003 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.003 "dma_device_type": 2 00:11:13.003 } 00:11:13.003 ], 00:11:13.003 "driver_specific": { 00:11:13.003 "passthru": { 00:11:13.003 "name": "pt2", 00:11:13.003 "base_bdev_name": "malloc2" 00:11:13.003 } 00:11:13.003 } 00:11:13.003 }' 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:13.003 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:13.261 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:13.261 "name": "pt3", 00:11:13.261 "aliases": [ 00:11:13.261 "00000000-0000-0000-0000-000000000003" 00:11:13.261 ], 00:11:13.261 "product_name": "passthru", 00:11:13.261 "block_size": 512, 00:11:13.261 "num_blocks": 65536, 00:11:13.261 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:13.261 "assigned_rate_limits": { 00:11:13.261 "rw_ios_per_sec": 0, 00:11:13.261 "rw_mbytes_per_sec": 0, 00:11:13.261 "r_mbytes_per_sec": 0, 00:11:13.261 "w_mbytes_per_sec": 0 00:11:13.261 }, 00:11:13.261 "claimed": true, 00:11:13.261 "claim_type": "exclusive_write", 00:11:13.261 "zoned": false, 00:11:13.261 "supported_io_types": { 
00:11:13.261 "read": true, 00:11:13.261 "write": true, 00:11:13.261 "unmap": true, 00:11:13.261 "flush": true, 00:11:13.261 "reset": true, 00:11:13.261 "nvme_admin": false, 00:11:13.261 "nvme_io": false, 00:11:13.261 "nvme_io_md": false, 00:11:13.261 "write_zeroes": true, 00:11:13.261 "zcopy": true, 00:11:13.261 "get_zone_info": false, 00:11:13.261 "zone_management": false, 00:11:13.261 "zone_append": false, 00:11:13.261 "compare": false, 00:11:13.261 "compare_and_write": false, 00:11:13.261 "abort": true, 00:11:13.261 "seek_hole": false, 00:11:13.261 "seek_data": false, 00:11:13.261 "copy": true, 00:11:13.261 "nvme_iov_md": false 00:11:13.261 }, 00:11:13.261 "memory_domains": [ 00:11:13.262 { 00:11:13.262 "dma_device_id": "system", 00:11:13.262 "dma_device_type": 1 00:11:13.262 }, 00:11:13.262 { 00:11:13.262 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:13.262 "dma_device_type": 2 00:11:13.262 } 00:11:13.262 ], 00:11:13.262 "driver_specific": { 00:11:13.262 "passthru": { 00:11:13.262 "name": "pt3", 00:11:13.262 "base_bdev_name": "malloc3" 00:11:13.262 } 00:11:13.262 } 00:11:13.262 }' 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:13.262 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:11:13.520 [2024-07-14 21:10:24.964100] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:13.520 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=7bb2e42d-4225-11ef-aa83-81fbc7dfef58 00:11:13.520 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 7bb2e42d-4225-11ef-aa83-81fbc7dfef58 ']' 00:11:13.520 21:10:24 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:13.778 [2024-07-14 21:10:25.244068] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:13.778 [2024-07-14 21:10:25.244087] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:13.778 [2024-07-14 21:10:25.244110] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:13.778 [2024-07-14 21:10:25.244125] 
bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:13.778 [2024-07-14 21:10:25.244130] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd582f835400 name raid_bdev1, state offline 00:11:13.778 21:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:13.778 21:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:11:14.036 21:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:11:14.036 21:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:11:14.036 21:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.036 21:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:11:14.295 21:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.295 21:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:14.553 21:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:11:14.553 21:10:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:11:14.811 21:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:11:14.811 21:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:11:15.070 21:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:11:15.070 21:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:15.070 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:11:15.070 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:15.070 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.070 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:15.070 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.070 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:15.070 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:11:15.070 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:11:15.070 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 
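For readers following the trace: the NOT helper from common/autotest_common.sh, whose xtrace output surrounds this point, runs the quoted command and inverts its exit status, so the bdev_raid_create call that executes next is expected to fail -- malloc1, malloc2 and malloc3 still carry the superblock of the earlier raid_bdev1, and the raid module refuses to reuse them. A minimal standalone sketch of the same negative check, assuming a running SPDK target on the socket used throughout this log; the rpc.py path, socket and bdev names are copied from the trace, and the inverted-exit idiom is an editorial reconstruction of NOT(), not the harness source:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Expect failure: the base bdevs hold a stale raid superblock.
    if "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat \
          -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
        echo 'unexpected success: stale raid superblocks should be rejected' >&2
        exit 1
    fi

The expected JSON-RPC error, code -17 "File exists", appears a few entries below.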
00:11:15.070 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:11:15.070 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:11:15.328 [2024-07-14 21:10:26.628123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:11:15.328 [2024-07-14 21:10:26.628931] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:11:15.328 [2024-07-14 21:10:26.628964] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:11:15.328 [2024-07-14 21:10:26.628978] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:11:15.328 [2024-07-14 21:10:26.629026] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:11:15.328 [2024-07-14 21:10:26.629055] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:11:15.328 [2024-07-14 21:10:26.629063] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:15.328 [2024-07-14 21:10:26.629067] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd582f835180 name raid_bdev1, state configuring 00:11:15.328 request: 00:11:15.328 { 00:11:15.328 "name": "raid_bdev1", 00:11:15.328 "raid_level": "concat", 00:11:15.328 "base_bdevs": [ 00:11:15.328 "malloc1", 00:11:15.328 "malloc2", 00:11:15.328 "malloc3" 00:11:15.328 ], 00:11:15.328 "strip_size_kb": 64, 00:11:15.328 "superblock": false, 00:11:15.328 "method": "bdev_raid_create", 00:11:15.328 "req_id": 1 00:11:15.328 } 00:11:15.328 Got JSON-RPC error response 00:11:15.328 response: 00:11:15.328 { 00:11:15.328 "code": -17, 00:11:15.328 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:11:15.328 } 00:11:15.328 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:11:15.328 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:11:15.328 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:11:15.328 21:10:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:11:15.328 21:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.328 21:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:11:15.328 21:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:11:15.328 21:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:11:15.328 21:10:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:11:15.586 [2024-07-14 21:10:27.112110] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:11:15.586 [2024-07-14 21:10:27.112167] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:15.586 [2024-07-14 21:10:27.112194] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device 
created at: 0x0xd582f834c80 00:11:15.586 [2024-07-14 21:10:27.112201] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:15.586 [2024-07-14 21:10:27.112912] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:15.586 [2024-07-14 21:10:27.112938] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:11:15.587 [2024-07-14 21:10:27.112961] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:11:15.587 [2024-07-14 21:10:27.112973] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:11:15.587 pt1 00:11:15.587 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:15.587 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:15.587 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:15.587 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:15.587 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:15.587 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:15.587 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:15.587 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:15.587 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:15.587 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:15.587 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:15.587 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:15.845 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:15.845 "name": "raid_bdev1", 00:11:15.845 "uuid": "7bb2e42d-4225-11ef-aa83-81fbc7dfef58", 00:11:15.845 "strip_size_kb": 64, 00:11:15.845 "state": "configuring", 00:11:15.845 "raid_level": "concat", 00:11:15.845 "superblock": true, 00:11:15.845 "num_base_bdevs": 3, 00:11:15.845 "num_base_bdevs_discovered": 1, 00:11:15.845 "num_base_bdevs_operational": 3, 00:11:15.845 "base_bdevs_list": [ 00:11:15.845 { 00:11:15.845 "name": "pt1", 00:11:15.845 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:15.845 "is_configured": true, 00:11:15.845 "data_offset": 2048, 00:11:15.845 "data_size": 63488 00:11:15.845 }, 00:11:15.845 { 00:11:15.845 "name": null, 00:11:15.845 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:15.845 "is_configured": false, 00:11:15.845 "data_offset": 2048, 00:11:15.845 "data_size": 63488 00:11:15.845 }, 00:11:15.845 { 00:11:15.845 "name": null, 00:11:15.845 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:15.845 "is_configured": false, 00:11:15.845 "data_offset": 2048, 00:11:15.845 "data_size": 63488 00:11:15.845 } 00:11:15.845 ] 00:11:15.845 }' 00:11:15.845 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:15.845 21:10:27 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:16.411 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 
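verify_raid_bdev_state, whose locals and jq filter appear just above, reduces to one RPC round-trip plus per-field assertions. A condensed sketch of that idiom, assuming the same rpc.py and socket as the rest of this log; the jq filters mirror the ones in the trace, and checks on fields beyond these three are elided:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Fetch the raid bdev record and assert the fields checked at this point.
    info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
           jq -r '.[] | select(.name == "raid_bdev1")')
    [[ $(jq -r .state <<<"$info") == configuring ]]
    [[ $(jq -r .raid_level <<<"$info") == concat ]]
    [[ $(jq -r .num_base_bdevs_discovered <<<"$info") == 1 ]]

With only pt1 re-registered, num_base_bdevs_discovered is 1 of 3, so the array stays in the configuring state until pt2 and pt3 are recreated below.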
00:11:16.411 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:16.411 [2024-07-14 21:10:27.904122] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:16.411 [2024-07-14 21:10:27.904178] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:16.411 [2024-07-14 21:10:27.904205] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd582f835680 00:11:16.411 [2024-07-14 21:10:27.904212] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:16.411 [2024-07-14 21:10:27.904341] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:16.411 [2024-07-14 21:10:27.904367] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:16.411 [2024-07-14 21:10:27.904405] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:16.411 [2024-07-14 21:10:27.904414] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:16.411 pt2 00:11:16.411 21:10:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:11:16.669 [2024-07-14 21:10:28.168135] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:11:16.669 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 3 00:11:16.669 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:16.669 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:16.669 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:16.669 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:16.669 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:16.669 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:16.669 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:16.669 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:16.669 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:16.669 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:16.669 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:16.927 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:16.927 "name": "raid_bdev1", 00:11:16.927 "uuid": "7bb2e42d-4225-11ef-aa83-81fbc7dfef58", 00:11:16.927 "strip_size_kb": 64, 00:11:16.927 "state": "configuring", 00:11:16.927 "raid_level": "concat", 00:11:16.927 "superblock": true, 00:11:16.927 "num_base_bdevs": 3, 00:11:16.927 "num_base_bdevs_discovered": 1, 00:11:16.927 "num_base_bdevs_operational": 3, 00:11:16.927 "base_bdevs_list": [ 00:11:16.927 { 00:11:16.927 "name": "pt1", 00:11:16.927 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:16.927 "is_configured": 
true, 00:11:16.927 "data_offset": 2048, 00:11:16.927 "data_size": 63488 00:11:16.927 }, 00:11:16.927 { 00:11:16.927 "name": null, 00:11:16.927 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:16.927 "is_configured": false, 00:11:16.927 "data_offset": 2048, 00:11:16.927 "data_size": 63488 00:11:16.927 }, 00:11:16.927 { 00:11:16.927 "name": null, 00:11:16.927 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:16.927 "is_configured": false, 00:11:16.927 "data_offset": 2048, 00:11:16.927 "data_size": 63488 00:11:16.927 } 00:11:16.927 ] 00:11:16.927 }' 00:11:16.927 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:16.927 21:10:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:17.492 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:11:17.492 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:17.492 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:11:17.492 [2024-07-14 21:10:28.968165] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:11:17.492 [2024-07-14 21:10:28.968210] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.492 [2024-07-14 21:10:28.968221] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd582f835680 00:11:17.492 [2024-07-14 21:10:28.968229] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.492 [2024-07-14 21:10:28.968365] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.492 [2024-07-14 21:10:28.968376] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:11:17.492 [2024-07-14 21:10:28.968400] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:11:17.492 [2024-07-14 21:10:28.968408] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:11:17.492 pt2 00:11:17.492 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:17.492 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:17.492 21:10:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:11:17.751 [2024-07-14 21:10:29.192160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:11:17.751 [2024-07-14 21:10:29.192212] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:17.751 [2024-07-14 21:10:29.192222] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xd582f835400 00:11:17.751 [2024-07-14 21:10:29.192230] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:17.751 [2024-07-14 21:10:29.192419] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:17.751 [2024-07-14 21:10:29.192429] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:11:17.751 [2024-07-14 21:10:29.192447] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:11:17.751 [2024-07-14 21:10:29.192470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev pt3 is claimed 00:11:17.751 [2024-07-14 21:10:29.192495] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xd582f834780 00:11:17.751 [2024-07-14 21:10:29.192500] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:17.751 [2024-07-14 21:10:29.192519] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xd582f897e20 00:11:17.751 [2024-07-14 21:10:29.192604] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xd582f834780 00:11:17.751 [2024-07-14 21:10:29.192609] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xd582f834780 00:11:17.751 [2024-07-14 21:10:29.192629] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:17.751 pt3 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:17.751 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:18.009 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:18.009 "name": "raid_bdev1", 00:11:18.009 "uuid": "7bb2e42d-4225-11ef-aa83-81fbc7dfef58", 00:11:18.009 "strip_size_kb": 64, 00:11:18.009 "state": "online", 00:11:18.009 "raid_level": "concat", 00:11:18.009 "superblock": true, 00:11:18.009 "num_base_bdevs": 3, 00:11:18.009 "num_base_bdevs_discovered": 3, 00:11:18.009 "num_base_bdevs_operational": 3, 00:11:18.009 "base_bdevs_list": [ 00:11:18.009 { 00:11:18.009 "name": "pt1", 00:11:18.009 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.009 "is_configured": true, 00:11:18.009 "data_offset": 2048, 00:11:18.009 "data_size": 63488 00:11:18.009 }, 00:11:18.009 { 00:11:18.009 "name": "pt2", 00:11:18.009 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.009 "is_configured": true, 00:11:18.009 "data_offset": 2048, 00:11:18.009 "data_size": 63488 00:11:18.009 }, 00:11:18.009 { 00:11:18.009 "name": "pt3", 00:11:18.009 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.009 "is_configured": true, 00:11:18.009 "data_offset": 2048, 00:11:18.009 
"data_size": 63488 00:11:18.009 } 00:11:18.009 ] 00:11:18.009 }' 00:11:18.009 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:18.009 21:10:29 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:18.268 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:11:18.269 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:11:18.269 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:18.269 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:18.269 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:18.269 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:18.269 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:18.269 21:10:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:18.526 [2024-07-14 21:10:29.988218] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:18.526 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:18.526 "name": "raid_bdev1", 00:11:18.526 "aliases": [ 00:11:18.526 "7bb2e42d-4225-11ef-aa83-81fbc7dfef58" 00:11:18.526 ], 00:11:18.526 "product_name": "Raid Volume", 00:11:18.526 "block_size": 512, 00:11:18.526 "num_blocks": 190464, 00:11:18.526 "uuid": "7bb2e42d-4225-11ef-aa83-81fbc7dfef58", 00:11:18.526 "assigned_rate_limits": { 00:11:18.526 "rw_ios_per_sec": 0, 00:11:18.526 "rw_mbytes_per_sec": 0, 00:11:18.526 "r_mbytes_per_sec": 0, 00:11:18.526 "w_mbytes_per_sec": 0 00:11:18.526 }, 00:11:18.526 "claimed": false, 00:11:18.526 "zoned": false, 00:11:18.526 "supported_io_types": { 00:11:18.526 "read": true, 00:11:18.526 "write": true, 00:11:18.526 "unmap": true, 00:11:18.526 "flush": true, 00:11:18.526 "reset": true, 00:11:18.526 "nvme_admin": false, 00:11:18.526 "nvme_io": false, 00:11:18.526 "nvme_io_md": false, 00:11:18.526 "write_zeroes": true, 00:11:18.526 "zcopy": false, 00:11:18.526 "get_zone_info": false, 00:11:18.526 "zone_management": false, 00:11:18.526 "zone_append": false, 00:11:18.526 "compare": false, 00:11:18.526 "compare_and_write": false, 00:11:18.526 "abort": false, 00:11:18.526 "seek_hole": false, 00:11:18.526 "seek_data": false, 00:11:18.526 "copy": false, 00:11:18.526 "nvme_iov_md": false 00:11:18.526 }, 00:11:18.526 "memory_domains": [ 00:11:18.526 { 00:11:18.526 "dma_device_id": "system", 00:11:18.526 "dma_device_type": 1 00:11:18.526 }, 00:11:18.526 { 00:11:18.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.526 "dma_device_type": 2 00:11:18.526 }, 00:11:18.526 { 00:11:18.526 "dma_device_id": "system", 00:11:18.526 "dma_device_type": 1 00:11:18.526 }, 00:11:18.526 { 00:11:18.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.526 "dma_device_type": 2 00:11:18.526 }, 00:11:18.526 { 00:11:18.526 "dma_device_id": "system", 00:11:18.526 "dma_device_type": 1 00:11:18.526 }, 00:11:18.526 { 00:11:18.526 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.526 "dma_device_type": 2 00:11:18.526 } 00:11:18.526 ], 00:11:18.526 "driver_specific": { 00:11:18.526 "raid": { 00:11:18.526 "uuid": "7bb2e42d-4225-11ef-aa83-81fbc7dfef58", 00:11:18.526 "strip_size_kb": 64, 00:11:18.526 "state": 
"online", 00:11:18.526 "raid_level": "concat", 00:11:18.526 "superblock": true, 00:11:18.526 "num_base_bdevs": 3, 00:11:18.526 "num_base_bdevs_discovered": 3, 00:11:18.526 "num_base_bdevs_operational": 3, 00:11:18.526 "base_bdevs_list": [ 00:11:18.526 { 00:11:18.526 "name": "pt1", 00:11:18.526 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.526 "is_configured": true, 00:11:18.526 "data_offset": 2048, 00:11:18.526 "data_size": 63488 00:11:18.526 }, 00:11:18.526 { 00:11:18.526 "name": "pt2", 00:11:18.526 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:18.526 "is_configured": true, 00:11:18.526 "data_offset": 2048, 00:11:18.526 "data_size": 63488 00:11:18.526 }, 00:11:18.526 { 00:11:18.526 "name": "pt3", 00:11:18.526 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:18.526 "is_configured": true, 00:11:18.526 "data_offset": 2048, 00:11:18.526 "data_size": 63488 00:11:18.526 } 00:11:18.526 ] 00:11:18.526 } 00:11:18.526 } 00:11:18.526 }' 00:11:18.526 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:18.526 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:11:18.526 pt2 00:11:18.526 pt3' 00:11:18.526 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:18.526 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:11:18.526 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:18.784 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:18.784 "name": "pt1", 00:11:18.784 "aliases": [ 00:11:18.784 "00000000-0000-0000-0000-000000000001" 00:11:18.784 ], 00:11:18.784 "product_name": "passthru", 00:11:18.784 "block_size": 512, 00:11:18.784 "num_blocks": 65536, 00:11:18.784 "uuid": "00000000-0000-0000-0000-000000000001", 00:11:18.784 "assigned_rate_limits": { 00:11:18.784 "rw_ios_per_sec": 0, 00:11:18.784 "rw_mbytes_per_sec": 0, 00:11:18.784 "r_mbytes_per_sec": 0, 00:11:18.784 "w_mbytes_per_sec": 0 00:11:18.784 }, 00:11:18.784 "claimed": true, 00:11:18.784 "claim_type": "exclusive_write", 00:11:18.784 "zoned": false, 00:11:18.784 "supported_io_types": { 00:11:18.784 "read": true, 00:11:18.784 "write": true, 00:11:18.784 "unmap": true, 00:11:18.785 "flush": true, 00:11:18.785 "reset": true, 00:11:18.785 "nvme_admin": false, 00:11:18.785 "nvme_io": false, 00:11:18.785 "nvme_io_md": false, 00:11:18.785 "write_zeroes": true, 00:11:18.785 "zcopy": true, 00:11:18.785 "get_zone_info": false, 00:11:18.785 "zone_management": false, 00:11:18.785 "zone_append": false, 00:11:18.785 "compare": false, 00:11:18.785 "compare_and_write": false, 00:11:18.785 "abort": true, 00:11:18.785 "seek_hole": false, 00:11:18.785 "seek_data": false, 00:11:18.785 "copy": true, 00:11:18.785 "nvme_iov_md": false 00:11:18.785 }, 00:11:18.785 "memory_domains": [ 00:11:18.785 { 00:11:18.785 "dma_device_id": "system", 00:11:18.785 "dma_device_type": 1 00:11:18.785 }, 00:11:18.785 { 00:11:18.785 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:18.785 "dma_device_type": 2 00:11:18.785 } 00:11:18.785 ], 00:11:18.785 "driver_specific": { 00:11:18.785 "passthru": { 00:11:18.785 "name": "pt1", 00:11:18.785 "base_bdev_name": "malloc1" 00:11:18.785 } 00:11:18.785 } 00:11:18.785 }' 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:11:18.785 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:19.043 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:19.043 "name": "pt2", 00:11:19.043 "aliases": [ 00:11:19.043 "00000000-0000-0000-0000-000000000002" 00:11:19.043 ], 00:11:19.043 "product_name": "passthru", 00:11:19.043 "block_size": 512, 00:11:19.043 "num_blocks": 65536, 00:11:19.043 "uuid": "00000000-0000-0000-0000-000000000002", 00:11:19.043 "assigned_rate_limits": { 00:11:19.043 "rw_ios_per_sec": 0, 00:11:19.043 "rw_mbytes_per_sec": 0, 00:11:19.043 "r_mbytes_per_sec": 0, 00:11:19.043 "w_mbytes_per_sec": 0 00:11:19.043 }, 00:11:19.043 "claimed": true, 00:11:19.043 "claim_type": "exclusive_write", 00:11:19.043 "zoned": false, 00:11:19.043 "supported_io_types": { 00:11:19.043 "read": true, 00:11:19.043 "write": true, 00:11:19.043 "unmap": true, 00:11:19.043 "flush": true, 00:11:19.043 "reset": true, 00:11:19.043 "nvme_admin": false, 00:11:19.043 "nvme_io": false, 00:11:19.043 "nvme_io_md": false, 00:11:19.043 "write_zeroes": true, 00:11:19.043 "zcopy": true, 00:11:19.043 "get_zone_info": false, 00:11:19.043 "zone_management": false, 00:11:19.043 "zone_append": false, 00:11:19.043 "compare": false, 00:11:19.043 "compare_and_write": false, 00:11:19.043 "abort": true, 00:11:19.043 "seek_hole": false, 00:11:19.043 "seek_data": false, 00:11:19.043 "copy": true, 00:11:19.043 "nvme_iov_md": false 00:11:19.043 }, 00:11:19.043 "memory_domains": [ 00:11:19.043 { 00:11:19.043 "dma_device_id": "system", 00:11:19.043 "dma_device_type": 1 00:11:19.043 }, 00:11:19.043 { 00:11:19.043 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.043 "dma_device_type": 2 00:11:19.043 } 00:11:19.043 ], 00:11:19.043 "driver_specific": { 00:11:19.043 "passthru": { 00:11:19.043 "name": "pt2", 00:11:19.043 "base_bdev_name": "malloc2" 00:11:19.043 } 00:11:19.043 } 00:11:19.043 }' 00:11:19.043 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:19.302 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:19.302 
21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:19.302 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:19.302 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:19.302 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:19.302 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:19.302 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:19.302 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:19.302 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:19.302 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:19.302 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:19.302 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:19.302 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:19.302 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:19.560 "name": "pt3", 00:11:19.560 "aliases": [ 00:11:19.560 "00000000-0000-0000-0000-000000000003" 00:11:19.560 ], 00:11:19.560 "product_name": "passthru", 00:11:19.560 "block_size": 512, 00:11:19.560 "num_blocks": 65536, 00:11:19.560 "uuid": "00000000-0000-0000-0000-000000000003", 00:11:19.560 "assigned_rate_limits": { 00:11:19.560 "rw_ios_per_sec": 0, 00:11:19.560 "rw_mbytes_per_sec": 0, 00:11:19.560 "r_mbytes_per_sec": 0, 00:11:19.560 "w_mbytes_per_sec": 0 00:11:19.560 }, 00:11:19.560 "claimed": true, 00:11:19.560 "claim_type": "exclusive_write", 00:11:19.560 "zoned": false, 00:11:19.560 "supported_io_types": { 00:11:19.560 "read": true, 00:11:19.560 "write": true, 00:11:19.560 "unmap": true, 00:11:19.560 "flush": true, 00:11:19.560 "reset": true, 00:11:19.560 "nvme_admin": false, 00:11:19.560 "nvme_io": false, 00:11:19.560 "nvme_io_md": false, 00:11:19.560 "write_zeroes": true, 00:11:19.560 "zcopy": true, 00:11:19.560 "get_zone_info": false, 00:11:19.560 "zone_management": false, 00:11:19.560 "zone_append": false, 00:11:19.560 "compare": false, 00:11:19.560 "compare_and_write": false, 00:11:19.560 "abort": true, 00:11:19.560 "seek_hole": false, 00:11:19.560 "seek_data": false, 00:11:19.560 "copy": true, 00:11:19.560 "nvme_iov_md": false 00:11:19.560 }, 00:11:19.560 "memory_domains": [ 00:11:19.560 { 00:11:19.560 "dma_device_id": "system", 00:11:19.560 "dma_device_type": 1 00:11:19.560 }, 00:11:19.560 { 00:11:19.560 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:19.560 "dma_device_type": 2 00:11:19.560 } 00:11:19.560 ], 00:11:19.560 "driver_specific": { 00:11:19.560 "passthru": { 00:11:19.560 "name": "pt3", 00:11:19.560 "base_bdev_name": "malloc3" 00:11:19.560 } 00:11:19.560 } 00:11:19.560 }' 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:11:19.560 21:10:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:11:19.819 [2024-07-14 21:10:31.204278] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 7bb2e42d-4225-11ef-aa83-81fbc7dfef58 '!=' 7bb2e42d-4225-11ef-aa83-81fbc7dfef58 ']' 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 55428 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 55428 ']' 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 55428 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 55428 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:11:19.819 killing process with pid 55428 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55428' 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 55428 00:11:19.819 [2024-07-14 21:10:31.235280] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:19.819 [2024-07-14 21:10:31.235301] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:19.819 [2024-07-14 21:10:31.235315] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:19.819 [2024-07-14 21:10:31.235319] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xd582f834780 name raid_bdev1, state offline 00:11:19.819 21:10:31 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@972 -- # wait 55428 00:11:19.819 [2024-07-14 21:10:31.253753] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:20.077 21:10:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:11:20.077 00:11:20.077 real 0m11.230s 00:11:20.077 user 0m19.787s 00:11:20.077 sys 0m1.886s 00:11:20.077 21:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:20.077 ************************************ 00:11:20.077 END TEST raid_superblock_test 00:11:20.077 ************************************ 00:11:20.077 21:10:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.077 21:10:31 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:20.077 21:10:31 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 3 read 00:11:20.077 21:10:31 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:20.077 21:10:31 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:20.077 21:10:31 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:20.077 ************************************ 00:11:20.077 START TEST raid_read_error_test 00:11:20.077 ************************************ 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 read 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:20.077 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 
-- # local fail_per_s 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.a9SfIoVJ7d 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55779 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55779 /var/tmp/spdk-raid.sock 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 55779 ']' 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:20.078 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:20.078 21:10:31 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:20.078 [2024-07-14 21:10:31.505537] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:20.078 [2024-07-14 21:10:31.505834] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:20.644 EAL: TSC is not safe to use in SMP mode 00:11:20.644 EAL: TSC is not invariant 00:11:20.644 [2024-07-14 21:10:32.043076] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:20.644 [2024-07-14 21:10:32.131416] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
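Before the reactor output resumes, note the per-leg stack that raid_io_error_test is about to build: a malloc bdev wrapped by an error bdev, wrapped in turn by a passthru bdev, with the three passthru legs assembled into the concat array. A reconstruction from the RPC calls logged below; the names, the 32 MiB / 512-byte-block malloc geometry and the EE_ prefix that bdev_error_create applies are all as they appear in the trace, while the loop is an editorial condensation:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # Build malloc -> error -> passthru for each of the three legs.
    for i in 1 2 3; do
        "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
        "$rpc" -s "$sock" bdev_error_create "BaseBdev${i}_malloc"
        "$rpc" -s "$sock" bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
    done
    # Assemble the legs into a 64 KiB-strip concat array with a superblock (-s).
    "$rpc" -s "$sock" bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s
    # Later the test injects read failures on one leg's error bdev:
    "$rpc" -s "$sock" bdev_error_inject_error EE_BaseBdev1_malloc read failure

Because concat carries no redundancy (has_redundancy returns 1 for it), bdevperf is expected to record a nonzero failure rate, which is the 0.48 fail_per_s asserted at the end of this test.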
00:11:20.644 [2024-07-14 21:10:32.133747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.644 [2024-07-14 21:10:32.134595] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:20.644 [2024-07-14 21:10:32.134606] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:21.210 21:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:21.210 21:10:32 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:21.210 21:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:21.210 21:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:21.210 BaseBdev1_malloc 00:11:21.210 21:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:21.467 true 00:11:21.467 21:10:32 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:21.725 [2024-07-14 21:10:33.163915] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:21.725 [2024-07-14 21:10:33.164013] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:21.725 [2024-07-14 21:10:33.164056] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f21b8834780 00:11:21.725 [2024-07-14 21:10:33.164064] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:21.725 [2024-07-14 21:10:33.164772] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:21.725 [2024-07-14 21:10:33.164799] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:21.725 BaseBdev1 00:11:21.725 21:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:21.725 21:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:21.982 BaseBdev2_malloc 00:11:21.983 21:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:22.240 true 00:11:22.240 21:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:22.499 [2024-07-14 21:10:33.847942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:22.499 [2024-07-14 21:10:33.848037] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:22.499 [2024-07-14 21:10:33.848074] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f21b8834c80 00:11:22.499 [2024-07-14 21:10:33.848081] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:22.499 [2024-07-14 21:10:33.848725] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:22.499 [2024-07-14 21:10:33.848751] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:11:22.499 BaseBdev2 00:11:22.499 21:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:22.499 21:10:33 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:22.756 BaseBdev3_malloc 00:11:22.756 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:23.014 true 00:11:23.014 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:23.014 [2024-07-14 21:10:34.531958] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:23.014 [2024-07-14 21:10:34.532060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:23.014 [2024-07-14 21:10:34.532097] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3f21b8835180 00:11:23.014 [2024-07-14 21:10:34.532105] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:23.014 [2024-07-14 21:10:34.532772] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:23.014 [2024-07-14 21:10:34.532798] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:23.014 BaseBdev3 00:11:23.014 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:23.273 [2024-07-14 21:10:34.748029] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:23.273 [2024-07-14 21:10:34.748625] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:23.273 [2024-07-14 21:10:34.748651] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:23.273 [2024-07-14 21:10:34.748702] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3f21b8835400 00:11:23.273 [2024-07-14 21:10:34.748708] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:23.273 [2024-07-14 21:10:34.748740] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3f21b88a0e20 00:11:23.273 [2024-07-14 21:10:34.748837] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3f21b8835400 00:11:23.273 [2024-07-14 21:10:34.748841] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3f21b8835400 00:11:23.273 [2024-07-14 21:10:34.748865] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:23.273 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:23.273 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:23.273 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:23.273 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:23.273 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:23.273 
21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:23.273 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:23.273 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:23.273 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:23.273 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:23.273 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:23.273 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:23.532 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:23.532 "name": "raid_bdev1", 00:11:23.532 "uuid": "82cd24ad-4225-11ef-aa83-81fbc7dfef58", 00:11:23.532 "strip_size_kb": 64, 00:11:23.532 "state": "online", 00:11:23.532 "raid_level": "concat", 00:11:23.532 "superblock": true, 00:11:23.532 "num_base_bdevs": 3, 00:11:23.532 "num_base_bdevs_discovered": 3, 00:11:23.532 "num_base_bdevs_operational": 3, 00:11:23.532 "base_bdevs_list": [ 00:11:23.532 { 00:11:23.532 "name": "BaseBdev1", 00:11:23.532 "uuid": "6c0eddca-39bf-1355-8574-fe5f2044ce9b", 00:11:23.532 "is_configured": true, 00:11:23.532 "data_offset": 2048, 00:11:23.532 "data_size": 63488 00:11:23.532 }, 00:11:23.532 { 00:11:23.532 "name": "BaseBdev2", 00:11:23.532 "uuid": "76d2f2af-9987-0357-b9fa-687bea812019", 00:11:23.532 "is_configured": true, 00:11:23.532 "data_offset": 2048, 00:11:23.532 "data_size": 63488 00:11:23.532 }, 00:11:23.532 { 00:11:23.532 "name": "BaseBdev3", 00:11:23.532 "uuid": "1cfa0e43-eeb9-ec57-b066-07abecd57c4f", 00:11:23.532 "is_configured": true, 00:11:23.532 "data_offset": 2048, 00:11:23.532 "data_size": 63488 00:11:23.532 } 00:11:23.532 ] 00:11:23.532 }' 00:11:23.532 21:10:34 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:23.532 21:10:34 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:23.861 21:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:23.861 21:10:35 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:11:23.861 [2024-07-14 21:10:35.384219] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3f21b88a0ec0 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:25.232 
21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:25.232 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:25.490 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:25.490 "name": "raid_bdev1", 00:11:25.490 "uuid": "82cd24ad-4225-11ef-aa83-81fbc7dfef58", 00:11:25.490 "strip_size_kb": 64, 00:11:25.490 "state": "online", 00:11:25.490 "raid_level": "concat", 00:11:25.490 "superblock": true, 00:11:25.490 "num_base_bdevs": 3, 00:11:25.490 "num_base_bdevs_discovered": 3, 00:11:25.490 "num_base_bdevs_operational": 3, 00:11:25.490 "base_bdevs_list": [ 00:11:25.490 { 00:11:25.490 "name": "BaseBdev1", 00:11:25.490 "uuid": "6c0eddca-39bf-1355-8574-fe5f2044ce9b", 00:11:25.490 "is_configured": true, 00:11:25.490 "data_offset": 2048, 00:11:25.490 "data_size": 63488 00:11:25.490 }, 00:11:25.490 { 00:11:25.490 "name": "BaseBdev2", 00:11:25.490 "uuid": "76d2f2af-9987-0357-b9fa-687bea812019", 00:11:25.490 "is_configured": true, 00:11:25.490 "data_offset": 2048, 00:11:25.490 "data_size": 63488 00:11:25.490 }, 00:11:25.490 { 00:11:25.490 "name": "BaseBdev3", 00:11:25.490 "uuid": "1cfa0e43-eeb9-ec57-b066-07abecd57c4f", 00:11:25.490 "is_configured": true, 00:11:25.490 "data_offset": 2048, 00:11:25.490 "data_size": 63488 00:11:25.490 } 00:11:25.490 ] 00:11:25.490 }' 00:11:25.490 21:10:36 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:25.490 21:10:36 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:25.749 21:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:26.008 [2024-07-14 21:10:37.466334] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:26.008 [2024-07-14 21:10:37.466358] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:26.008 [2024-07-14 21:10:37.466728] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:26.008 [2024-07-14 21:10:37.466739] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:26.008 [2024-07-14 21:10:37.466746] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:26.008 [2024-07-14 21:10:37.466750] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3f21b8835400 name raid_bdev1, state offline 00:11:26.008 0 00:11:26.008 21:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55779 00:11:26.008 21:10:37 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 55779 ']' 00:11:26.008 21:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 55779 00:11:26.008 21:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:11:26.008 21:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:26.008 21:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 55779 00:11:26.008 21:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:11:26.008 21:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:11:26.008 21:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:11:26.008 killing process with pid 55779 00:11:26.008 21:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55779' 00:11:26.008 21:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 55779 00:11:26.008 [2024-07-14 21:10:37.494296] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:26.008 21:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 55779 00:11:26.008 [2024-07-14 21:10:37.513026] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:26.267 21:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.a9SfIoVJ7d 00:11:26.267 21:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:26.267 21:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:26.267 21:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.48 00:11:26.267 21:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:26.267 21:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:26.267 21:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:26.267 21:10:37 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.48 != \0\.\0\0 ]] 00:11:26.267 00:11:26.267 real 0m6.209s 00:11:26.267 user 0m9.553s 00:11:26.267 sys 0m1.107s 00:11:26.267 21:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:26.267 ************************************ 00:11:26.267 END TEST raid_read_error_test 00:11:26.267 ************************************ 00:11:26.267 21:10:37 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.267 21:10:37 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:26.267 21:10:37 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 3 write 00:11:26.267 21:10:37 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:26.267 21:10:37 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:26.267 21:10:37 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:26.267 ************************************ 00:11:26.267 START TEST raid_write_error_test 00:11:26.267 ************************************ 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 3 write 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:11:26.267 21:10:37 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.IH2vHYcMB8 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=55910 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 55910 /var/tmp/spdk-raid.sock 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 55910 ']' 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:26.267 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
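[annotation] For readers following the harness: before bdevperf starts, the read/write error tests above assemble each RAID member as a three-layer stack (malloc base, error bdev wrapper, passthru on top), then build the concat volume and inject faults at the error layer. A minimal sketch of that stacking, assuming an SPDK target is already listening on /var/tmp/spdk-raid.sock; the RPC names and arguments mirror the rpc.py calls visible in this log, while the $RPC variable and the loop are illustrative rather than the literal bdev_raid.sh code:

  # assumes bdevperf (or another SPDK app) already serves this socket
  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  for i in 1 2 3; do
    # 32 MiB malloc bdev with 512-byte blocks, as in the log above
    $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
    # error bdev wraps the malloc and exposes EE_BaseBdev${i}_malloc
    $RPC bdev_error_create "BaseBdev${i}_malloc"
    # passthru claims the error bdev under the name the raid will consume
    $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
  done

  # concat raid with 64 KiB strips and a superblock (-s), as created above
  $RPC bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

  # faults are later injected at the error layer, e.g. for the write test:
  $RPC bdev_error_inject_error EE_BaseBdev1_malloc write failure

The test then runs perform_tests through bdevperf.py and greps the per-second failure count for raid_bdev1 out of the bdevperf log (the fail_per_s lines above), expecting a non-zero value since concat carries no redundancy.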
00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:26.267 21:10:37 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:26.267 [2024-07-14 21:10:37.764536] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:26.268 [2024-07-14 21:10:37.764724] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:26.835 EAL: TSC is not safe to use in SMP mode 00:11:26.835 EAL: TSC is not invariant 00:11:26.835 [2024-07-14 21:10:38.284168] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:26.835 [2024-07-14 21:10:38.371727] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:11:26.835 [2024-07-14 21:10:38.374113] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:26.835 [2024-07-14 21:10:38.375025] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:26.835 [2024-07-14 21:10:38.375040] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:27.402 21:10:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:27.402 21:10:38 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:11:27.402 21:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:27.402 21:10:38 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:11:27.661 BaseBdev1_malloc 00:11:27.661 21:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:11:27.919 true 00:11:27.919 21:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:11:28.178 [2024-07-14 21:10:39.504548] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:11:28.178 [2024-07-14 21:10:39.504610] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.178 [2024-07-14 21:10:39.504648] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x377fca034780 00:11:28.178 [2024-07-14 21:10:39.504656] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.178 [2024-07-14 21:10:39.505332] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.178 [2024-07-14 21:10:39.505371] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:11:28.178 BaseBdev1 00:11:28.178 21:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:28.178 21:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:11:28.437 BaseBdev2_malloc 00:11:28.437 21:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:11:28.437 true 00:11:28.695 21:10:39 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:11:28.955 [2024-07-14 21:10:40.244616] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:11:28.955 [2024-07-14 21:10:40.244710] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:28.955 [2024-07-14 21:10:40.244736] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x377fca034c80 00:11:28.955 [2024-07-14 21:10:40.244744] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:28.955 [2024-07-14 21:10:40.245446] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:28.955 [2024-07-14 21:10:40.245470] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:11:28.955 BaseBdev2 00:11:28.955 21:10:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:11:28.955 21:10:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:11:28.955 BaseBdev3_malloc 00:11:28.955 21:10:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:11:29.213 true 00:11:29.213 21:10:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:11:29.472 [2024-07-14 21:10:40.940611] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:11:29.472 [2024-07-14 21:10:40.940669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:11:29.472 [2024-07-14 21:10:40.940706] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x377fca035180 00:11:29.472 [2024-07-14 21:10:40.940722] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:11:29.472 [2024-07-14 21:10:40.941410] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:11:29.472 [2024-07-14 21:10:40.941435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:11:29.472 BaseBdev3 00:11:29.472 21:10:40 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:11:29.731 [2024-07-14 21:10:41.152617] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:29.731 [2024-07-14 21:10:41.153170] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:29.731 [2024-07-14 21:10:41.153209] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:29.731 [2024-07-14 21:10:41.153277] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x377fca035400 00:11:29.731 [2024-07-14 21:10:41.153283] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 190464, blocklen 512 00:11:29.731 [2024-07-14 21:10:41.153316] bdev_raid.c: 
251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x377fca0a0e20 00:11:29.731 [2024-07-14 21:10:41.153428] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x377fca035400 00:11:29.731 [2024-07-14 21:10:41.153433] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x377fca035400 00:11:29.731 [2024-07-14 21:10:41.153472] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:29.731 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:29.731 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:29.731 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:29.731 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:29.731 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:29.731 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:29.731 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:29.731 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:29.731 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:29.731 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:29.731 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:29.731 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:29.990 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:29.990 "name": "raid_bdev1", 00:11:29.990 "uuid": "869e67e6-4225-11ef-aa83-81fbc7dfef58", 00:11:29.990 "strip_size_kb": 64, 00:11:29.990 "state": "online", 00:11:29.990 "raid_level": "concat", 00:11:29.990 "superblock": true, 00:11:29.990 "num_base_bdevs": 3, 00:11:29.990 "num_base_bdevs_discovered": 3, 00:11:29.990 "num_base_bdevs_operational": 3, 00:11:29.990 "base_bdevs_list": [ 00:11:29.990 { 00:11:29.990 "name": "BaseBdev1", 00:11:29.990 "uuid": "8e0da878-67ba-9b58-807e-c9c13ba96733", 00:11:29.990 "is_configured": true, 00:11:29.990 "data_offset": 2048, 00:11:29.990 "data_size": 63488 00:11:29.990 }, 00:11:29.990 { 00:11:29.990 "name": "BaseBdev2", 00:11:29.990 "uuid": "121e2333-bb5f-a95f-9d1e-fdf0d980ca3c", 00:11:29.990 "is_configured": true, 00:11:29.990 "data_offset": 2048, 00:11:29.990 "data_size": 63488 00:11:29.990 }, 00:11:29.990 { 00:11:29.990 "name": "BaseBdev3", 00:11:29.990 "uuid": "7b5eb181-98ae-be52-b03e-3445d529a581", 00:11:29.990 "is_configured": true, 00:11:29.990 "data_offset": 2048, 00:11:29.990 "data_size": 63488 00:11:29.990 } 00:11:29.990 ] 00:11:29.990 }' 00:11:29.990 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:29.990 21:10:41 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:30.249 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:11:30.249 21:10:41 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s 
/var/tmp/spdk-raid.sock perform_tests 00:11:30.249 [2024-07-14 21:10:41.792833] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x377fca0a0ec0 00:11:31.629 21:10:42 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 3 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:31.629 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:11:31.896 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:31.896 "name": "raid_bdev1", 00:11:31.896 "uuid": "869e67e6-4225-11ef-aa83-81fbc7dfef58", 00:11:31.896 "strip_size_kb": 64, 00:11:31.896 "state": "online", 00:11:31.896 "raid_level": "concat", 00:11:31.896 "superblock": true, 00:11:31.896 "num_base_bdevs": 3, 00:11:31.896 "num_base_bdevs_discovered": 3, 00:11:31.896 "num_base_bdevs_operational": 3, 00:11:31.896 "base_bdevs_list": [ 00:11:31.896 { 00:11:31.896 "name": "BaseBdev1", 00:11:31.896 "uuid": "8e0da878-67ba-9b58-807e-c9c13ba96733", 00:11:31.896 "is_configured": true, 00:11:31.896 "data_offset": 2048, 00:11:31.896 "data_size": 63488 00:11:31.896 }, 00:11:31.896 { 00:11:31.896 "name": "BaseBdev2", 00:11:31.896 "uuid": "121e2333-bb5f-a95f-9d1e-fdf0d980ca3c", 00:11:31.896 "is_configured": true, 00:11:31.896 "data_offset": 2048, 00:11:31.896 "data_size": 63488 00:11:31.896 }, 00:11:31.896 { 00:11:31.896 "name": "BaseBdev3", 00:11:31.896 "uuid": "7b5eb181-98ae-be52-b03e-3445d529a581", 00:11:31.896 "is_configured": true, 00:11:31.896 "data_offset": 2048, 00:11:31.896 "data_size": 63488 00:11:31.896 } 00:11:31.896 ] 00:11:31.896 }' 00:11:31.896 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:31.896 21:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.154 
21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:11:32.412 [2024-07-14 21:10:43.798451] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:11:32.412 [2024-07-14 21:10:43.798477] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:32.412 [2024-07-14 21:10:43.798809] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:32.412 [2024-07-14 21:10:43.798818] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:32.412 [2024-07-14 21:10:43.798825] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:32.412 [2024-07-14 21:10:43.798845] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x377fca035400 name raid_bdev1, state offline 00:11:32.412 0 00:11:32.412 21:10:43 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 55910 00:11:32.412 21:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 55910 ']' 00:11:32.412 21:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 55910 00:11:32.412 21:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:11:32.412 21:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:32.412 21:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 55910 00:11:32.412 21:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:11:32.412 21:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:11:32.412 21:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:11:32.412 killing process with pid 55910 00:11:32.412 21:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 55910' 00:11:32.412 21:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 55910 00:11:32.412 [2024-07-14 21:10:43.826731] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:32.412 21:10:43 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 55910 00:11:32.412 [2024-07-14 21:10:43.843495] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:32.670 21:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.IH2vHYcMB8 00:11:32.670 21:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:11:32.670 21:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:11:32.670 21:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.50 00:11:32.670 21:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:11:32.670 21:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:32.670 21:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:11:32.670 21:10:44 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.50 != \0\.\0\0 ]] 00:11:32.670 00:11:32.670 real 0m6.274s 00:11:32.670 user 0m9.779s 00:11:32.670 sys 0m1.006s 00:11:32.670 21:10:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # 
xtrace_disable 00:11:32.670 21:10:44 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.670 ************************************ 00:11:32.670 END TEST raid_write_error_test 00:11:32.670 ************************************ 00:11:32.670 21:10:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:32.670 21:10:44 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:11:32.670 21:10:44 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 3 false 00:11:32.670 21:10:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:32.670 21:10:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:32.670 21:10:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:32.670 ************************************ 00:11:32.670 START TEST raid_state_function_test 00:11:32.670 ************************************ 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 false 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=56039 00:11:32.670 Process raid pid: 56039 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56039' 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 56039 /var/tmp/spdk-raid.sock 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 56039 ']' 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:32.670 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:32.670 21:10:44 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:32.670 [2024-07-14 21:10:44.085250] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:32.670 [2024-07-14 21:10:44.085439] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:33.237 EAL: TSC is not safe to use in SMP mode 00:11:33.237 EAL: TSC is not invariant 00:11:33.237 [2024-07-14 21:10:44.632071] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:33.237 [2024-07-14 21:10:44.728912] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
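[annotation] The raid_state_function_test that starts here drives an Existed_Raid volume through its lifecycle and checks the JSON reported by RPC after each step. A rough sketch of the verification idiom behind verify_raid_bdev_state, again assuming the same rpc.py socket; the jq filter is the one used in the log, while the shell variable names here are illustrative:

  RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

  # creating a raid1 volume whose base bdevs do not exist yet leaves it "configuring"
  $RPC bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid

  # pull the info block for one raid bdev and compare fields against expectations
  info=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")')
  state=$(jq -r '.state' <<< "$info")
  discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
  [ "$state" = "configuring" ] && [ "$discovered" -eq 0 ] \
    || echo "unexpected state: $state ($discovered discovered)"

As the log below shows, each bdev_malloc_create that satisfies a named base bdev bumps num_base_bdevs_discovered by one, and the volume flips from "configuring" to "online" only once all three members are claimed.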
00:11:33.237 [2024-07-14 21:10:44.731543] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:33.237 [2024-07-14 21:10:44.732364] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.237 [2024-07-14 21:10:44.732378] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:33.813 21:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:33.813 21:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:11:33.813 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:34.087 [2024-07-14 21:10:45.418613] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:34.087 [2024-07-14 21:10:45.418677] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:34.087 [2024-07-14 21:10:45.418698] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:34.087 [2024-07-14 21:10:45.418706] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:34.087 [2024-07-14 21:10:45.418709] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:34.087 [2024-07-14 21:10:45.418716] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:34.087 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:34.087 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:34.087 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:34.087 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:34.087 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:34.087 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:34.087 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:34.087 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:34.087 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:34.087 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:34.087 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:34.087 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:34.346 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:34.346 "name": "Existed_Raid", 00:11:34.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.346 "strip_size_kb": 0, 00:11:34.346 "state": "configuring", 00:11:34.346 "raid_level": "raid1", 00:11:34.346 "superblock": false, 00:11:34.346 "num_base_bdevs": 3, 00:11:34.346 "num_base_bdevs_discovered": 0, 00:11:34.346 "num_base_bdevs_operational": 3, 00:11:34.346 "base_bdevs_list": [ 00:11:34.346 
{ 00:11:34.346 "name": "BaseBdev1", 00:11:34.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.346 "is_configured": false, 00:11:34.346 "data_offset": 0, 00:11:34.346 "data_size": 0 00:11:34.346 }, 00:11:34.346 { 00:11:34.346 "name": "BaseBdev2", 00:11:34.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.346 "is_configured": false, 00:11:34.346 "data_offset": 0, 00:11:34.346 "data_size": 0 00:11:34.346 }, 00:11:34.346 { 00:11:34.346 "name": "BaseBdev3", 00:11:34.346 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:34.346 "is_configured": false, 00:11:34.346 "data_offset": 0, 00:11:34.346 "data_size": 0 00:11:34.346 } 00:11:34.346 ] 00:11:34.346 }' 00:11:34.346 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:34.346 21:10:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:34.605 21:10:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:34.864 [2024-07-14 21:10:46.210659] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:34.864 [2024-07-14 21:10:46.210679] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcde72634500 name Existed_Raid, state configuring 00:11:34.864 21:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:35.122 [2024-07-14 21:10:46.486669] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:35.122 [2024-07-14 21:10:46.486740] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:35.122 [2024-07-14 21:10:46.486744] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:35.122 [2024-07-14 21:10:46.486769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:35.122 [2024-07-14 21:10:46.486772] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:35.122 [2024-07-14 21:10:46.486778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:35.122 21:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:35.380 [2024-07-14 21:10:46.751694] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:35.380 BaseBdev1 00:11:35.380 21:10:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:35.380 21:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:35.380 21:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:35.380 21:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:35.380 21:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:35.380 21:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:35.380 21:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_wait_for_examine 00:11:35.639 21:10:46 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:35.898 [ 00:11:35.898 { 00:11:35.898 "name": "BaseBdev1", 00:11:35.898 "aliases": [ 00:11:35.898 "89f49b52-4225-11ef-aa83-81fbc7dfef58" 00:11:35.898 ], 00:11:35.898 "product_name": "Malloc disk", 00:11:35.898 "block_size": 512, 00:11:35.898 "num_blocks": 65536, 00:11:35.898 "uuid": "89f49b52-4225-11ef-aa83-81fbc7dfef58", 00:11:35.898 "assigned_rate_limits": { 00:11:35.898 "rw_ios_per_sec": 0, 00:11:35.898 "rw_mbytes_per_sec": 0, 00:11:35.898 "r_mbytes_per_sec": 0, 00:11:35.898 "w_mbytes_per_sec": 0 00:11:35.898 }, 00:11:35.898 "claimed": true, 00:11:35.898 "claim_type": "exclusive_write", 00:11:35.898 "zoned": false, 00:11:35.898 "supported_io_types": { 00:11:35.898 "read": true, 00:11:35.898 "write": true, 00:11:35.898 "unmap": true, 00:11:35.898 "flush": true, 00:11:35.898 "reset": true, 00:11:35.898 "nvme_admin": false, 00:11:35.898 "nvme_io": false, 00:11:35.898 "nvme_io_md": false, 00:11:35.898 "write_zeroes": true, 00:11:35.898 "zcopy": true, 00:11:35.898 "get_zone_info": false, 00:11:35.898 "zone_management": false, 00:11:35.898 "zone_append": false, 00:11:35.898 "compare": false, 00:11:35.898 "compare_and_write": false, 00:11:35.898 "abort": true, 00:11:35.898 "seek_hole": false, 00:11:35.898 "seek_data": false, 00:11:35.898 "copy": true, 00:11:35.898 "nvme_iov_md": false 00:11:35.898 }, 00:11:35.898 "memory_domains": [ 00:11:35.898 { 00:11:35.898 "dma_device_id": "system", 00:11:35.898 "dma_device_type": 1 00:11:35.898 }, 00:11:35.898 { 00:11:35.898 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:35.898 "dma_device_type": 2 00:11:35.898 } 00:11:35.898 ], 00:11:35.898 "driver_specific": {} 00:11:35.898 } 00:11:35.898 ] 00:11:35.898 21:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:35.898 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:35.898 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:35.898 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:35.898 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:35.898 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:35.898 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:35.898 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:35.898 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:35.898 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:35.898 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:35.898 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:35.898 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.157 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:11:36.157 "name": "Existed_Raid", 00:11:36.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.157 "strip_size_kb": 0, 00:11:36.157 "state": "configuring", 00:11:36.157 "raid_level": "raid1", 00:11:36.157 "superblock": false, 00:11:36.157 "num_base_bdevs": 3, 00:11:36.157 "num_base_bdevs_discovered": 1, 00:11:36.157 "num_base_bdevs_operational": 3, 00:11:36.157 "base_bdevs_list": [ 00:11:36.157 { 00:11:36.157 "name": "BaseBdev1", 00:11:36.157 "uuid": "89f49b52-4225-11ef-aa83-81fbc7dfef58", 00:11:36.157 "is_configured": true, 00:11:36.157 "data_offset": 0, 00:11:36.157 "data_size": 65536 00:11:36.157 }, 00:11:36.157 { 00:11:36.157 "name": "BaseBdev2", 00:11:36.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.157 "is_configured": false, 00:11:36.157 "data_offset": 0, 00:11:36.157 "data_size": 0 00:11:36.157 }, 00:11:36.157 { 00:11:36.157 "name": "BaseBdev3", 00:11:36.157 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:36.157 "is_configured": false, 00:11:36.157 "data_offset": 0, 00:11:36.157 "data_size": 0 00:11:36.157 } 00:11:36.157 ] 00:11:36.157 }' 00:11:36.157 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:36.157 21:10:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:36.416 21:10:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:36.674 [2024-07-14 21:10:48.118744] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:36.674 [2024-07-14 21:10:48.118785] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcde72634500 name Existed_Raid, state configuring 00:11:36.674 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:36.934 [2024-07-14 21:10:48.342756] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:36.934 [2024-07-14 21:10:48.343766] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:36.934 [2024-07-14 21:10:48.343826] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:36.934 [2024-07-14 21:10:48.343831] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:36.934 [2024-07-14 21:10:48.343868] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:36.934 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:37.193 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:37.193 "name": "Existed_Raid", 00:11:37.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.193 "strip_size_kb": 0, 00:11:37.193 "state": "configuring", 00:11:37.193 "raid_level": "raid1", 00:11:37.193 "superblock": false, 00:11:37.193 "num_base_bdevs": 3, 00:11:37.193 "num_base_bdevs_discovered": 1, 00:11:37.193 "num_base_bdevs_operational": 3, 00:11:37.193 "base_bdevs_list": [ 00:11:37.193 { 00:11:37.193 "name": "BaseBdev1", 00:11:37.193 "uuid": "89f49b52-4225-11ef-aa83-81fbc7dfef58", 00:11:37.193 "is_configured": true, 00:11:37.193 "data_offset": 0, 00:11:37.193 "data_size": 65536 00:11:37.193 }, 00:11:37.193 { 00:11:37.193 "name": "BaseBdev2", 00:11:37.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.193 "is_configured": false, 00:11:37.193 "data_offset": 0, 00:11:37.193 "data_size": 0 00:11:37.193 }, 00:11:37.193 { 00:11:37.193 "name": "BaseBdev3", 00:11:37.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:37.193 "is_configured": false, 00:11:37.193 "data_offset": 0, 00:11:37.193 "data_size": 0 00:11:37.193 } 00:11:37.193 ] 00:11:37.193 }' 00:11:37.193 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:37.193 21:10:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:37.451 21:10:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:37.710 [2024-07-14 21:10:49.202952] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:37.710 BaseBdev2 00:11:37.710 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:11:37.710 21:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:37.710 21:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:37.710 21:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:37.710 21:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:37.710 21:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:37.710 21:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:37.969 21:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:38.228 [ 00:11:38.228 { 00:11:38.228 "name": "BaseBdev2", 00:11:38.228 "aliases": [ 00:11:38.228 "8b6ac55f-4225-11ef-aa83-81fbc7dfef58" 00:11:38.228 ], 00:11:38.228 "product_name": "Malloc disk", 00:11:38.228 "block_size": 512, 00:11:38.228 "num_blocks": 65536, 00:11:38.228 "uuid": "8b6ac55f-4225-11ef-aa83-81fbc7dfef58", 00:11:38.228 "assigned_rate_limits": { 00:11:38.228 "rw_ios_per_sec": 0, 00:11:38.228 "rw_mbytes_per_sec": 0, 00:11:38.228 "r_mbytes_per_sec": 0, 00:11:38.228 "w_mbytes_per_sec": 0 00:11:38.228 }, 00:11:38.228 "claimed": true, 00:11:38.228 "claim_type": "exclusive_write", 00:11:38.228 "zoned": false, 00:11:38.228 "supported_io_types": { 00:11:38.228 "read": true, 00:11:38.228 "write": true, 00:11:38.228 "unmap": true, 00:11:38.228 "flush": true, 00:11:38.228 "reset": true, 00:11:38.228 "nvme_admin": false, 00:11:38.228 "nvme_io": false, 00:11:38.228 "nvme_io_md": false, 00:11:38.228 "write_zeroes": true, 00:11:38.228 "zcopy": true, 00:11:38.228 "get_zone_info": false, 00:11:38.228 "zone_management": false, 00:11:38.228 "zone_append": false, 00:11:38.228 "compare": false, 00:11:38.228 "compare_and_write": false, 00:11:38.228 "abort": true, 00:11:38.228 "seek_hole": false, 00:11:38.228 "seek_data": false, 00:11:38.228 "copy": true, 00:11:38.228 "nvme_iov_md": false 00:11:38.228 }, 00:11:38.228 "memory_domains": [ 00:11:38.228 { 00:11:38.228 "dma_device_id": "system", 00:11:38.228 "dma_device_type": 1 00:11:38.228 }, 00:11:38.228 { 00:11:38.228 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:38.228 "dma_device_type": 2 00:11:38.228 } 00:11:38.228 ], 00:11:38.228 "driver_specific": {} 00:11:38.228 } 00:11:38.228 ] 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:38.228 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:38.487 21:10:49 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:38.487 "name": "Existed_Raid", 00:11:38.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.487 "strip_size_kb": 0, 00:11:38.487 "state": "configuring", 00:11:38.487 "raid_level": "raid1", 00:11:38.487 "superblock": false, 00:11:38.487 "num_base_bdevs": 3, 00:11:38.487 "num_base_bdevs_discovered": 2, 00:11:38.487 "num_base_bdevs_operational": 3, 00:11:38.487 "base_bdevs_list": [ 00:11:38.487 { 00:11:38.487 "name": "BaseBdev1", 00:11:38.487 "uuid": "89f49b52-4225-11ef-aa83-81fbc7dfef58", 00:11:38.487 "is_configured": true, 00:11:38.487 "data_offset": 0, 00:11:38.487 "data_size": 65536 00:11:38.487 }, 00:11:38.487 { 00:11:38.487 "name": "BaseBdev2", 00:11:38.487 "uuid": "8b6ac55f-4225-11ef-aa83-81fbc7dfef58", 00:11:38.487 "is_configured": true, 00:11:38.487 "data_offset": 0, 00:11:38.487 "data_size": 65536 00:11:38.487 }, 00:11:38.487 { 00:11:38.487 "name": "BaseBdev3", 00:11:38.487 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:38.487 "is_configured": false, 00:11:38.487 "data_offset": 0, 00:11:38.487 "data_size": 0 00:11:38.487 } 00:11:38.487 ] 00:11:38.487 }' 00:11:38.487 21:10:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:38.487 21:10:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:38.745 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:39.004 [2024-07-14 21:10:50.406971] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:39.004 [2024-07-14 21:10:50.406994] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xcde72634a00 00:11:39.004 [2024-07-14 21:10:50.407014] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:39.004 [2024-07-14 21:10:50.407033] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcde72697e20 00:11:39.004 [2024-07-14 21:10:50.407123] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xcde72634a00 00:11:39.004 [2024-07-14 21:10:50.407128] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xcde72634a00 00:11:39.004 [2024-07-14 21:10:50.407158] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:39.004 BaseBdev3 00:11:39.004 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:11:39.004 21:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:39.004 21:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:39.004 21:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:39.004 21:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:39.004 21:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:39.004 21:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:39.262 21:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 
-t 2000 00:11:39.521 [ 00:11:39.521 { 00:11:39.521 "name": "BaseBdev3", 00:11:39.521 "aliases": [ 00:11:39.521 "8c227de8-4225-11ef-aa83-81fbc7dfef58" 00:11:39.521 ], 00:11:39.521 "product_name": "Malloc disk", 00:11:39.521 "block_size": 512, 00:11:39.521 "num_blocks": 65536, 00:11:39.521 "uuid": "8c227de8-4225-11ef-aa83-81fbc7dfef58", 00:11:39.521 "assigned_rate_limits": { 00:11:39.521 "rw_ios_per_sec": 0, 00:11:39.521 "rw_mbytes_per_sec": 0, 00:11:39.521 "r_mbytes_per_sec": 0, 00:11:39.521 "w_mbytes_per_sec": 0 00:11:39.521 }, 00:11:39.521 "claimed": true, 00:11:39.521 "claim_type": "exclusive_write", 00:11:39.521 "zoned": false, 00:11:39.521 "supported_io_types": { 00:11:39.521 "read": true, 00:11:39.521 "write": true, 00:11:39.521 "unmap": true, 00:11:39.521 "flush": true, 00:11:39.521 "reset": true, 00:11:39.521 "nvme_admin": false, 00:11:39.521 "nvme_io": false, 00:11:39.521 "nvme_io_md": false, 00:11:39.521 "write_zeroes": true, 00:11:39.521 "zcopy": true, 00:11:39.521 "get_zone_info": false, 00:11:39.521 "zone_management": false, 00:11:39.521 "zone_append": false, 00:11:39.521 "compare": false, 00:11:39.521 "compare_and_write": false, 00:11:39.521 "abort": true, 00:11:39.521 "seek_hole": false, 00:11:39.521 "seek_data": false, 00:11:39.521 "copy": true, 00:11:39.521 "nvme_iov_md": false 00:11:39.521 }, 00:11:39.521 "memory_domains": [ 00:11:39.521 { 00:11:39.521 "dma_device_id": "system", 00:11:39.521 "dma_device_type": 1 00:11:39.521 }, 00:11:39.521 { 00:11:39.521 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:39.521 "dma_device_type": 2 00:11:39.521 } 00:11:39.521 ], 00:11:39.521 "driver_specific": {} 00:11:39.521 } 00:11:39.521 ] 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:39.521 21:10:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:39.780 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:11:39.780 "name": "Existed_Raid", 00:11:39.780 "uuid": "8c228429-4225-11ef-aa83-81fbc7dfef58", 00:11:39.780 "strip_size_kb": 0, 00:11:39.780 "state": "online", 00:11:39.780 "raid_level": "raid1", 00:11:39.780 "superblock": false, 00:11:39.780 "num_base_bdevs": 3, 00:11:39.780 "num_base_bdevs_discovered": 3, 00:11:39.780 "num_base_bdevs_operational": 3, 00:11:39.780 "base_bdevs_list": [ 00:11:39.780 { 00:11:39.780 "name": "BaseBdev1", 00:11:39.780 "uuid": "89f49b52-4225-11ef-aa83-81fbc7dfef58", 00:11:39.780 "is_configured": true, 00:11:39.780 "data_offset": 0, 00:11:39.780 "data_size": 65536 00:11:39.780 }, 00:11:39.780 { 00:11:39.780 "name": "BaseBdev2", 00:11:39.780 "uuid": "8b6ac55f-4225-11ef-aa83-81fbc7dfef58", 00:11:39.780 "is_configured": true, 00:11:39.780 "data_offset": 0, 00:11:39.780 "data_size": 65536 00:11:39.780 }, 00:11:39.780 { 00:11:39.780 "name": "BaseBdev3", 00:11:39.780 "uuid": "8c227de8-4225-11ef-aa83-81fbc7dfef58", 00:11:39.780 "is_configured": true, 00:11:39.780 "data_offset": 0, 00:11:39.780 "data_size": 65536 00:11:39.780 } 00:11:39.780 ] 00:11:39.780 }' 00:11:39.780 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:39.780 21:10:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:40.038 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:11:40.038 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:40.038 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:40.038 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:40.038 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:40.038 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:40.038 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:40.038 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:40.297 [2024-07-14 21:10:51.662904] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:40.297 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:40.297 "name": "Existed_Raid", 00:11:40.297 "aliases": [ 00:11:40.297 "8c228429-4225-11ef-aa83-81fbc7dfef58" 00:11:40.297 ], 00:11:40.297 "product_name": "Raid Volume", 00:11:40.297 "block_size": 512, 00:11:40.297 "num_blocks": 65536, 00:11:40.297 "uuid": "8c228429-4225-11ef-aa83-81fbc7dfef58", 00:11:40.297 "assigned_rate_limits": { 00:11:40.297 "rw_ios_per_sec": 0, 00:11:40.297 "rw_mbytes_per_sec": 0, 00:11:40.297 "r_mbytes_per_sec": 0, 00:11:40.297 "w_mbytes_per_sec": 0 00:11:40.297 }, 00:11:40.297 "claimed": false, 00:11:40.297 "zoned": false, 00:11:40.297 "supported_io_types": { 00:11:40.297 "read": true, 00:11:40.297 "write": true, 00:11:40.297 "unmap": false, 00:11:40.297 "flush": false, 00:11:40.297 "reset": true, 00:11:40.297 "nvme_admin": false, 00:11:40.297 "nvme_io": false, 00:11:40.297 "nvme_io_md": false, 00:11:40.297 "write_zeroes": true, 00:11:40.297 "zcopy": false, 00:11:40.297 "get_zone_info": false, 00:11:40.297 "zone_management": false, 00:11:40.297 "zone_append": false, 00:11:40.297 "compare": false, 00:11:40.297 
"compare_and_write": false, 00:11:40.297 "abort": false, 00:11:40.297 "seek_hole": false, 00:11:40.297 "seek_data": false, 00:11:40.297 "copy": false, 00:11:40.297 "nvme_iov_md": false 00:11:40.297 }, 00:11:40.297 "memory_domains": [ 00:11:40.297 { 00:11:40.297 "dma_device_id": "system", 00:11:40.297 "dma_device_type": 1 00:11:40.297 }, 00:11:40.297 { 00:11:40.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.297 "dma_device_type": 2 00:11:40.297 }, 00:11:40.297 { 00:11:40.297 "dma_device_id": "system", 00:11:40.297 "dma_device_type": 1 00:11:40.297 }, 00:11:40.297 { 00:11:40.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.297 "dma_device_type": 2 00:11:40.297 }, 00:11:40.297 { 00:11:40.297 "dma_device_id": "system", 00:11:40.297 "dma_device_type": 1 00:11:40.297 }, 00:11:40.297 { 00:11:40.297 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.297 "dma_device_type": 2 00:11:40.297 } 00:11:40.297 ], 00:11:40.297 "driver_specific": { 00:11:40.297 "raid": { 00:11:40.297 "uuid": "8c228429-4225-11ef-aa83-81fbc7dfef58", 00:11:40.297 "strip_size_kb": 0, 00:11:40.297 "state": "online", 00:11:40.297 "raid_level": "raid1", 00:11:40.297 "superblock": false, 00:11:40.297 "num_base_bdevs": 3, 00:11:40.297 "num_base_bdevs_discovered": 3, 00:11:40.297 "num_base_bdevs_operational": 3, 00:11:40.297 "base_bdevs_list": [ 00:11:40.297 { 00:11:40.297 "name": "BaseBdev1", 00:11:40.297 "uuid": "89f49b52-4225-11ef-aa83-81fbc7dfef58", 00:11:40.297 "is_configured": true, 00:11:40.297 "data_offset": 0, 00:11:40.297 "data_size": 65536 00:11:40.297 }, 00:11:40.297 { 00:11:40.297 "name": "BaseBdev2", 00:11:40.297 "uuid": "8b6ac55f-4225-11ef-aa83-81fbc7dfef58", 00:11:40.297 "is_configured": true, 00:11:40.297 "data_offset": 0, 00:11:40.297 "data_size": 65536 00:11:40.297 }, 00:11:40.297 { 00:11:40.297 "name": "BaseBdev3", 00:11:40.297 "uuid": "8c227de8-4225-11ef-aa83-81fbc7dfef58", 00:11:40.297 "is_configured": true, 00:11:40.297 "data_offset": 0, 00:11:40.297 "data_size": 65536 00:11:40.297 } 00:11:40.297 ] 00:11:40.297 } 00:11:40.297 } 00:11:40.297 }' 00:11:40.297 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:40.297 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:11:40.297 BaseBdev2 00:11:40.297 BaseBdev3' 00:11:40.297 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:40.297 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:11:40.297 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:40.555 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:40.555 "name": "BaseBdev1", 00:11:40.555 "aliases": [ 00:11:40.555 "89f49b52-4225-11ef-aa83-81fbc7dfef58" 00:11:40.555 ], 00:11:40.555 "product_name": "Malloc disk", 00:11:40.555 "block_size": 512, 00:11:40.555 "num_blocks": 65536, 00:11:40.555 "uuid": "89f49b52-4225-11ef-aa83-81fbc7dfef58", 00:11:40.555 "assigned_rate_limits": { 00:11:40.555 "rw_ios_per_sec": 0, 00:11:40.555 "rw_mbytes_per_sec": 0, 00:11:40.555 "r_mbytes_per_sec": 0, 00:11:40.555 "w_mbytes_per_sec": 0 00:11:40.555 }, 00:11:40.555 "claimed": true, 00:11:40.555 "claim_type": "exclusive_write", 00:11:40.555 "zoned": false, 00:11:40.555 "supported_io_types": { 
00:11:40.555 "read": true, 00:11:40.555 "write": true, 00:11:40.555 "unmap": true, 00:11:40.555 "flush": true, 00:11:40.555 "reset": true, 00:11:40.555 "nvme_admin": false, 00:11:40.555 "nvme_io": false, 00:11:40.555 "nvme_io_md": false, 00:11:40.555 "write_zeroes": true, 00:11:40.555 "zcopy": true, 00:11:40.555 "get_zone_info": false, 00:11:40.555 "zone_management": false, 00:11:40.555 "zone_append": false, 00:11:40.555 "compare": false, 00:11:40.555 "compare_and_write": false, 00:11:40.555 "abort": true, 00:11:40.555 "seek_hole": false, 00:11:40.555 "seek_data": false, 00:11:40.555 "copy": true, 00:11:40.555 "nvme_iov_md": false 00:11:40.555 }, 00:11:40.555 "memory_domains": [ 00:11:40.555 { 00:11:40.555 "dma_device_id": "system", 00:11:40.555 "dma_device_type": 1 00:11:40.555 }, 00:11:40.555 { 00:11:40.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.555 "dma_device_type": 2 00:11:40.555 } 00:11:40.555 ], 00:11:40.555 "driver_specific": {} 00:11:40.555 }' 00:11:40.555 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.555 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.555 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:40.555 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.555 21:10:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.555 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:40.555 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:40.555 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:40.555 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:40.555 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:40.555 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:40.555 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:40.555 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:40.555 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:40.555 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:40.813 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:40.813 "name": "BaseBdev2", 00:11:40.813 "aliases": [ 00:11:40.813 "8b6ac55f-4225-11ef-aa83-81fbc7dfef58" 00:11:40.813 ], 00:11:40.813 "product_name": "Malloc disk", 00:11:40.813 "block_size": 512, 00:11:40.813 "num_blocks": 65536, 00:11:40.813 "uuid": "8b6ac55f-4225-11ef-aa83-81fbc7dfef58", 00:11:40.813 "assigned_rate_limits": { 00:11:40.813 "rw_ios_per_sec": 0, 00:11:40.813 "rw_mbytes_per_sec": 0, 00:11:40.813 "r_mbytes_per_sec": 0, 00:11:40.813 "w_mbytes_per_sec": 0 00:11:40.813 }, 00:11:40.813 "claimed": true, 00:11:40.813 "claim_type": "exclusive_write", 00:11:40.813 "zoned": false, 00:11:40.813 "supported_io_types": { 00:11:40.813 "read": true, 00:11:40.813 "write": true, 00:11:40.813 "unmap": true, 00:11:40.813 "flush": true, 00:11:40.813 "reset": true, 00:11:40.813 "nvme_admin": false, 00:11:40.813 "nvme_io": 
false, 00:11:40.813 "nvme_io_md": false, 00:11:40.813 "write_zeroes": true, 00:11:40.813 "zcopy": true, 00:11:40.813 "get_zone_info": false, 00:11:40.813 "zone_management": false, 00:11:40.813 "zone_append": false, 00:11:40.813 "compare": false, 00:11:40.813 "compare_and_write": false, 00:11:40.813 "abort": true, 00:11:40.813 "seek_hole": false, 00:11:40.813 "seek_data": false, 00:11:40.813 "copy": true, 00:11:40.813 "nvme_iov_md": false 00:11:40.813 }, 00:11:40.813 "memory_domains": [ 00:11:40.813 { 00:11:40.813 "dma_device_id": "system", 00:11:40.813 "dma_device_type": 1 00:11:40.813 }, 00:11:40.813 { 00:11:40.813 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:40.813 "dma_device_type": 2 00:11:40.813 } 00:11:40.813 ], 00:11:40.813 "driver_specific": {} 00:11:40.813 }' 00:11:40.813 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.813 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:40.813 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:40.813 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.813 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:40.813 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:40.813 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:40.813 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:41.071 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:41.071 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:41.071 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:41.071 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:41.071 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:41.071 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:41.071 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:41.328 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:41.328 "name": "BaseBdev3", 00:11:41.328 "aliases": [ 00:11:41.328 "8c227de8-4225-11ef-aa83-81fbc7dfef58" 00:11:41.328 ], 00:11:41.328 "product_name": "Malloc disk", 00:11:41.328 "block_size": 512, 00:11:41.328 "num_blocks": 65536, 00:11:41.328 "uuid": "8c227de8-4225-11ef-aa83-81fbc7dfef58", 00:11:41.328 "assigned_rate_limits": { 00:11:41.328 "rw_ios_per_sec": 0, 00:11:41.328 "rw_mbytes_per_sec": 0, 00:11:41.328 "r_mbytes_per_sec": 0, 00:11:41.328 "w_mbytes_per_sec": 0 00:11:41.328 }, 00:11:41.328 "claimed": true, 00:11:41.328 "claim_type": "exclusive_write", 00:11:41.328 "zoned": false, 00:11:41.328 "supported_io_types": { 00:11:41.328 "read": true, 00:11:41.328 "write": true, 00:11:41.328 "unmap": true, 00:11:41.328 "flush": true, 00:11:41.328 "reset": true, 00:11:41.328 "nvme_admin": false, 00:11:41.328 "nvme_io": false, 00:11:41.328 "nvme_io_md": false, 00:11:41.328 "write_zeroes": true, 00:11:41.328 "zcopy": true, 00:11:41.328 "get_zone_info": false, 00:11:41.328 "zone_management": false, 00:11:41.328 
"zone_append": false, 00:11:41.328 "compare": false, 00:11:41.328 "compare_and_write": false, 00:11:41.328 "abort": true, 00:11:41.328 "seek_hole": false, 00:11:41.328 "seek_data": false, 00:11:41.328 "copy": true, 00:11:41.328 "nvme_iov_md": false 00:11:41.328 }, 00:11:41.328 "memory_domains": [ 00:11:41.328 { 00:11:41.328 "dma_device_id": "system", 00:11:41.328 "dma_device_type": 1 00:11:41.328 }, 00:11:41.328 { 00:11:41.328 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:41.328 "dma_device_type": 2 00:11:41.328 } 00:11:41.328 ], 00:11:41.328 "driver_specific": {} 00:11:41.328 }' 00:11:41.328 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:41.328 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:41.328 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:41.328 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:41.328 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:41.328 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:41.329 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:41.329 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:41.329 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:41.329 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:41.329 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:41.329 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:41.329 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:41.586 [2024-07-14 21:10:52.970947] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:41.586 21:10:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:41.844 21:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:41.844 "name": "Existed_Raid", 00:11:41.844 "uuid": "8c228429-4225-11ef-aa83-81fbc7dfef58", 00:11:41.844 "strip_size_kb": 0, 00:11:41.844 "state": "online", 00:11:41.844 "raid_level": "raid1", 00:11:41.844 "superblock": false, 00:11:41.844 "num_base_bdevs": 3, 00:11:41.844 "num_base_bdevs_discovered": 2, 00:11:41.844 "num_base_bdevs_operational": 2, 00:11:41.844 "base_bdevs_list": [ 00:11:41.844 { 00:11:41.844 "name": null, 00:11:41.844 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:41.844 "is_configured": false, 00:11:41.844 "data_offset": 0, 00:11:41.844 "data_size": 65536 00:11:41.844 }, 00:11:41.844 { 00:11:41.844 "name": "BaseBdev2", 00:11:41.844 "uuid": "8b6ac55f-4225-11ef-aa83-81fbc7dfef58", 00:11:41.844 "is_configured": true, 00:11:41.844 "data_offset": 0, 00:11:41.844 "data_size": 65536 00:11:41.844 }, 00:11:41.844 { 00:11:41.844 "name": "BaseBdev3", 00:11:41.844 "uuid": "8c227de8-4225-11ef-aa83-81fbc7dfef58", 00:11:41.844 "is_configured": true, 00:11:41.844 "data_offset": 0, 00:11:41.844 "data_size": 65536 00:11:41.844 } 00:11:41.844 ] 00:11:41.844 }' 00:11:41.844 21:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:41.844 21:10:53 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:42.101 21:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:11:42.101 21:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:42.101 21:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.101 21:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:42.359 21:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:42.360 21:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:42.360 21:10:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:11:42.618 [2024-07-14 21:10:54.073183] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:42.618 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:42.618 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:42.618 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:42.618 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:11:42.876 21:10:54 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:11:42.877 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:11:42.877 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:11:43.141 [2024-07-14 21:10:54.551305] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:43.141 [2024-07-14 21:10:54.551351] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:43.141 [2024-07-14 21:10:54.557495] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:43.141 [2024-07-14 21:10:54.557525] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:43.141 [2024-07-14 21:10:54.557529] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcde72634a00 name Existed_Raid, state offline 00:11:43.141 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:11:43.141 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:11:43.141 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:43.141 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:11:43.424 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:11:43.424 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:11:43.424 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:11:43.424 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:11:43.424 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:43.424 21:10:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:11:43.695 BaseBdev2 00:11:43.695 21:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:11:43.695 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:11:43.695 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:43.695 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:43.695 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:43.696 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:43.696 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:43.953 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:11:43.953 [ 00:11:43.953 { 00:11:43.953 "name": "BaseBdev2", 00:11:43.953 "aliases": [ 00:11:43.953 "8edef975-4225-11ef-aa83-81fbc7dfef58" 00:11:43.953 ], 00:11:43.953 "product_name": "Malloc disk", 00:11:43.953 
"block_size": 512, 00:11:43.953 "num_blocks": 65536, 00:11:43.953 "uuid": "8edef975-4225-11ef-aa83-81fbc7dfef58", 00:11:43.953 "assigned_rate_limits": { 00:11:43.953 "rw_ios_per_sec": 0, 00:11:43.953 "rw_mbytes_per_sec": 0, 00:11:43.953 "r_mbytes_per_sec": 0, 00:11:43.953 "w_mbytes_per_sec": 0 00:11:43.953 }, 00:11:43.953 "claimed": false, 00:11:43.953 "zoned": false, 00:11:43.953 "supported_io_types": { 00:11:43.953 "read": true, 00:11:43.953 "write": true, 00:11:43.953 "unmap": true, 00:11:43.953 "flush": true, 00:11:43.953 "reset": true, 00:11:43.953 "nvme_admin": false, 00:11:43.953 "nvme_io": false, 00:11:43.953 "nvme_io_md": false, 00:11:43.953 "write_zeroes": true, 00:11:43.953 "zcopy": true, 00:11:43.953 "get_zone_info": false, 00:11:43.953 "zone_management": false, 00:11:43.953 "zone_append": false, 00:11:43.953 "compare": false, 00:11:43.953 "compare_and_write": false, 00:11:43.953 "abort": true, 00:11:43.953 "seek_hole": false, 00:11:43.953 "seek_data": false, 00:11:43.953 "copy": true, 00:11:43.953 "nvme_iov_md": false 00:11:43.953 }, 00:11:43.953 "memory_domains": [ 00:11:43.953 { 00:11:43.953 "dma_device_id": "system", 00:11:43.953 "dma_device_type": 1 00:11:43.953 }, 00:11:43.953 { 00:11:43.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:43.954 "dma_device_type": 2 00:11:43.954 } 00:11:43.954 ], 00:11:43.954 "driver_specific": {} 00:11:43.954 } 00:11:43.954 ] 00:11:43.954 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:43.954 21:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:43.954 21:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:43.954 21:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:11:44.211 BaseBdev3 00:11:44.211 21:10:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:11:44.212 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:11:44.212 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:44.212 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:44.212 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:44.212 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:44.212 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:44.469 21:10:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:11:44.727 [ 00:11:44.727 { 00:11:44.727 "name": "BaseBdev3", 00:11:44.727 "aliases": [ 00:11:44.727 "8f47f584-4225-11ef-aa83-81fbc7dfef58" 00:11:44.727 ], 00:11:44.727 "product_name": "Malloc disk", 00:11:44.727 "block_size": 512, 00:11:44.727 "num_blocks": 65536, 00:11:44.727 "uuid": "8f47f584-4225-11ef-aa83-81fbc7dfef58", 00:11:44.727 "assigned_rate_limits": { 00:11:44.727 "rw_ios_per_sec": 0, 00:11:44.727 "rw_mbytes_per_sec": 0, 00:11:44.727 "r_mbytes_per_sec": 0, 00:11:44.727 "w_mbytes_per_sec": 0 00:11:44.727 }, 00:11:44.727 "claimed": false, 
00:11:44.727 "zoned": false, 00:11:44.727 "supported_io_types": { 00:11:44.727 "read": true, 00:11:44.727 "write": true, 00:11:44.727 "unmap": true, 00:11:44.727 "flush": true, 00:11:44.727 "reset": true, 00:11:44.727 "nvme_admin": false, 00:11:44.727 "nvme_io": false, 00:11:44.727 "nvme_io_md": false, 00:11:44.727 "write_zeroes": true, 00:11:44.727 "zcopy": true, 00:11:44.727 "get_zone_info": false, 00:11:44.727 "zone_management": false, 00:11:44.727 "zone_append": false, 00:11:44.727 "compare": false, 00:11:44.727 "compare_and_write": false, 00:11:44.727 "abort": true, 00:11:44.727 "seek_hole": false, 00:11:44.727 "seek_data": false, 00:11:44.727 "copy": true, 00:11:44.727 "nvme_iov_md": false 00:11:44.727 }, 00:11:44.727 "memory_domains": [ 00:11:44.727 { 00:11:44.727 "dma_device_id": "system", 00:11:44.727 "dma_device_type": 1 00:11:44.727 }, 00:11:44.727 { 00:11:44.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:44.727 "dma_device_type": 2 00:11:44.727 } 00:11:44.727 ], 00:11:44.727 "driver_specific": {} 00:11:44.728 } 00:11:44.728 ] 00:11:44.728 21:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:44.728 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:11:44.728 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:11:44.728 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:44.985 [2024-07-14 21:10:56.441602] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:44.986 [2024-07-14 21:10:56.441660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:44.986 [2024-07-14 21:10:56.441686] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:44.986 [2024-07-14 21:10:56.442287] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:44.986 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:44.986 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:44.986 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:44.986 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:44.986 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:44.986 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:44.986 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:44.986 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:44.986 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:44.986 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:44.986 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:44.986 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] 
| select(.name == "Existed_Raid")' 00:11:45.243 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:45.243 "name": "Existed_Raid", 00:11:45.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.243 "strip_size_kb": 0, 00:11:45.243 "state": "configuring", 00:11:45.243 "raid_level": "raid1", 00:11:45.243 "superblock": false, 00:11:45.243 "num_base_bdevs": 3, 00:11:45.243 "num_base_bdevs_discovered": 2, 00:11:45.243 "num_base_bdevs_operational": 3, 00:11:45.243 "base_bdevs_list": [ 00:11:45.243 { 00:11:45.243 "name": "BaseBdev1", 00:11:45.243 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:45.243 "is_configured": false, 00:11:45.243 "data_offset": 0, 00:11:45.243 "data_size": 0 00:11:45.243 }, 00:11:45.243 { 00:11:45.243 "name": "BaseBdev2", 00:11:45.243 "uuid": "8edef975-4225-11ef-aa83-81fbc7dfef58", 00:11:45.243 "is_configured": true, 00:11:45.243 "data_offset": 0, 00:11:45.243 "data_size": 65536 00:11:45.243 }, 00:11:45.243 { 00:11:45.243 "name": "BaseBdev3", 00:11:45.243 "uuid": "8f47f584-4225-11ef-aa83-81fbc7dfef58", 00:11:45.243 "is_configured": true, 00:11:45.243 "data_offset": 0, 00:11:45.243 "data_size": 65536 00:11:45.243 } 00:11:45.243 ] 00:11:45.243 }' 00:11:45.243 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:45.243 21:10:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:45.501 21:10:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:11:45.760 [2024-07-14 21:10:57.217624] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:11:45.760 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:45.760 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:45.760 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:45.760 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:45.760 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:45.760 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:45.760 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:45.760 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:45.760 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:45.760 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:45.760 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:45.760 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:46.019 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:46.019 "name": "Existed_Raid", 00:11:46.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.019 "strip_size_kb": 0, 00:11:46.019 "state": "configuring", 00:11:46.019 "raid_level": "raid1", 00:11:46.019 "superblock": 
false, 00:11:46.019 "num_base_bdevs": 3, 00:11:46.019 "num_base_bdevs_discovered": 1, 00:11:46.019 "num_base_bdevs_operational": 3, 00:11:46.019 "base_bdevs_list": [ 00:11:46.019 { 00:11:46.019 "name": "BaseBdev1", 00:11:46.019 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:46.019 "is_configured": false, 00:11:46.019 "data_offset": 0, 00:11:46.019 "data_size": 0 00:11:46.019 }, 00:11:46.019 { 00:11:46.019 "name": null, 00:11:46.019 "uuid": "8edef975-4225-11ef-aa83-81fbc7dfef58", 00:11:46.019 "is_configured": false, 00:11:46.019 "data_offset": 0, 00:11:46.019 "data_size": 65536 00:11:46.019 }, 00:11:46.019 { 00:11:46.019 "name": "BaseBdev3", 00:11:46.019 "uuid": "8f47f584-4225-11ef-aa83-81fbc7dfef58", 00:11:46.019 "is_configured": true, 00:11:46.019 "data_offset": 0, 00:11:46.019 "data_size": 65536 00:11:46.019 } 00:11:46.019 ] 00:11:46.019 }' 00:11:46.019 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:46.019 21:10:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:46.277 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:46.277 21:10:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:46.535 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:11:46.535 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:46.793 [2024-07-14 21:10:58.253794] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:46.793 BaseBdev1 00:11:46.793 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:11:46.793 21:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:46.793 21:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:46.793 21:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:46.793 21:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:46.793 21:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:46.793 21:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:47.051 21:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:47.309 [ 00:11:47.309 { 00:11:47.309 "name": "BaseBdev1", 00:11:47.309 "aliases": [ 00:11:47.309 "90cfd24b-4225-11ef-aa83-81fbc7dfef58" 00:11:47.309 ], 00:11:47.309 "product_name": "Malloc disk", 00:11:47.309 "block_size": 512, 00:11:47.309 "num_blocks": 65536, 00:11:47.309 "uuid": "90cfd24b-4225-11ef-aa83-81fbc7dfef58", 00:11:47.309 "assigned_rate_limits": { 00:11:47.309 "rw_ios_per_sec": 0, 00:11:47.309 "rw_mbytes_per_sec": 0, 00:11:47.309 "r_mbytes_per_sec": 0, 00:11:47.309 "w_mbytes_per_sec": 0 00:11:47.309 }, 00:11:47.309 "claimed": true, 00:11:47.309 "claim_type": "exclusive_write", 00:11:47.309 "zoned": false, 00:11:47.309 
"supported_io_types": { 00:11:47.309 "read": true, 00:11:47.309 "write": true, 00:11:47.309 "unmap": true, 00:11:47.309 "flush": true, 00:11:47.309 "reset": true, 00:11:47.309 "nvme_admin": false, 00:11:47.309 "nvme_io": false, 00:11:47.309 "nvme_io_md": false, 00:11:47.309 "write_zeroes": true, 00:11:47.309 "zcopy": true, 00:11:47.309 "get_zone_info": false, 00:11:47.309 "zone_management": false, 00:11:47.309 "zone_append": false, 00:11:47.309 "compare": false, 00:11:47.309 "compare_and_write": false, 00:11:47.309 "abort": true, 00:11:47.309 "seek_hole": false, 00:11:47.309 "seek_data": false, 00:11:47.309 "copy": true, 00:11:47.309 "nvme_iov_md": false 00:11:47.309 }, 00:11:47.309 "memory_domains": [ 00:11:47.309 { 00:11:47.309 "dma_device_id": "system", 00:11:47.309 "dma_device_type": 1 00:11:47.309 }, 00:11:47.309 { 00:11:47.309 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:47.309 "dma_device_type": 2 00:11:47.309 } 00:11:47.309 ], 00:11:47.309 "driver_specific": {} 00:11:47.309 } 00:11:47.309 ] 00:11:47.309 21:10:58 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:47.309 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:47.309 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:47.309 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:47.309 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:47.309 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:47.309 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:47.309 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:47.309 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:47.309 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:47.309 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:47.309 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:47.309 21:10:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:47.568 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:47.568 "name": "Existed_Raid", 00:11:47.568 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:47.568 "strip_size_kb": 0, 00:11:47.568 "state": "configuring", 00:11:47.568 "raid_level": "raid1", 00:11:47.568 "superblock": false, 00:11:47.568 "num_base_bdevs": 3, 00:11:47.568 "num_base_bdevs_discovered": 2, 00:11:47.568 "num_base_bdevs_operational": 3, 00:11:47.568 "base_bdevs_list": [ 00:11:47.568 { 00:11:47.569 "name": "BaseBdev1", 00:11:47.569 "uuid": "90cfd24b-4225-11ef-aa83-81fbc7dfef58", 00:11:47.569 "is_configured": true, 00:11:47.569 "data_offset": 0, 00:11:47.569 "data_size": 65536 00:11:47.569 }, 00:11:47.569 { 00:11:47.569 "name": null, 00:11:47.569 "uuid": "8edef975-4225-11ef-aa83-81fbc7dfef58", 00:11:47.569 "is_configured": false, 00:11:47.569 "data_offset": 0, 00:11:47.569 "data_size": 65536 00:11:47.569 }, 00:11:47.569 { 
00:11:47.569 "name": "BaseBdev3", 00:11:47.569 "uuid": "8f47f584-4225-11ef-aa83-81fbc7dfef58", 00:11:47.569 "is_configured": true, 00:11:47.569 "data_offset": 0, 00:11:47.569 "data_size": 65536 00:11:47.569 } 00:11:47.569 ] 00:11:47.569 }' 00:11:47.569 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:47.569 21:10:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.135 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.135 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:48.135 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:11:48.135 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:11:48.393 [2024-07-14 21:10:59.889700] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:11:48.394 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:48.394 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:48.394 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:48.394 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:48.394 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:48.394 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:48.394 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:48.394 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:48.394 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:48.394 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:48.394 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.394 21:10:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:48.652 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:48.652 "name": "Existed_Raid", 00:11:48.652 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:48.652 "strip_size_kb": 0, 00:11:48.652 "state": "configuring", 00:11:48.652 "raid_level": "raid1", 00:11:48.652 "superblock": false, 00:11:48.652 "num_base_bdevs": 3, 00:11:48.652 "num_base_bdevs_discovered": 1, 00:11:48.652 "num_base_bdevs_operational": 3, 00:11:48.652 "base_bdevs_list": [ 00:11:48.652 { 00:11:48.652 "name": "BaseBdev1", 00:11:48.652 "uuid": "90cfd24b-4225-11ef-aa83-81fbc7dfef58", 00:11:48.652 "is_configured": true, 00:11:48.652 "data_offset": 0, 00:11:48.652 "data_size": 65536 00:11:48.652 }, 00:11:48.652 { 00:11:48.652 "name": null, 00:11:48.652 "uuid": "8edef975-4225-11ef-aa83-81fbc7dfef58", 00:11:48.652 "is_configured": false, 00:11:48.652 "data_offset": 0, 00:11:48.652 
"data_size": 65536 00:11:48.652 }, 00:11:48.652 { 00:11:48.652 "name": null, 00:11:48.652 "uuid": "8f47f584-4225-11ef-aa83-81fbc7dfef58", 00:11:48.652 "is_configured": false, 00:11:48.652 "data_offset": 0, 00:11:48.652 "data_size": 65536 00:11:48.652 } 00:11:48.652 ] 00:11:48.652 }' 00:11:48.652 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:48.652 21:11:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:48.910 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:48.910 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:49.169 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:11:49.169 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:11:49.428 [2024-07-14 21:11:00.957739] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:11:49.428 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:49.428 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:49.428 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:49.428 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:49.428 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:49.428 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:49.686 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:49.686 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:49.686 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:49.686 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:49.686 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.686 21:11:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:49.686 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:49.686 "name": "Existed_Raid", 00:11:49.686 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:49.686 "strip_size_kb": 0, 00:11:49.686 "state": "configuring", 00:11:49.686 "raid_level": "raid1", 00:11:49.686 "superblock": false, 00:11:49.686 "num_base_bdevs": 3, 00:11:49.686 "num_base_bdevs_discovered": 2, 00:11:49.686 "num_base_bdevs_operational": 3, 00:11:49.686 "base_bdevs_list": [ 00:11:49.686 { 00:11:49.686 "name": "BaseBdev1", 00:11:49.686 "uuid": "90cfd24b-4225-11ef-aa83-81fbc7dfef58", 00:11:49.686 "is_configured": true, 00:11:49.686 "data_offset": 0, 00:11:49.686 "data_size": 65536 00:11:49.686 }, 00:11:49.686 { 00:11:49.686 "name": null, 00:11:49.686 "uuid": "8edef975-4225-11ef-aa83-81fbc7dfef58", 
00:11:49.686 "is_configured": false, 00:11:49.686 "data_offset": 0, 00:11:49.686 "data_size": 65536 00:11:49.686 }, 00:11:49.686 { 00:11:49.686 "name": "BaseBdev3", 00:11:49.686 "uuid": "8f47f584-4225-11ef-aa83-81fbc7dfef58", 00:11:49.686 "is_configured": true, 00:11:49.686 "data_offset": 0, 00:11:49.686 "data_size": 65536 00:11:49.686 } 00:11:49.686 ] 00:11:49.686 }' 00:11:49.686 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:49.686 21:11:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:49.945 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:49.945 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:11:50.203 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:11:50.203 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:11:50.461 [2024-07-14 21:11:01.857778] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:11:50.461 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:50.461 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:50.461 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:50.461 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:50.461 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:50.461 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:50.461 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:50.461 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:50.461 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:50.461 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:50.461 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.461 21:11:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:50.719 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:50.719 "name": "Existed_Raid", 00:11:50.719 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:50.719 "strip_size_kb": 0, 00:11:50.719 "state": "configuring", 00:11:50.719 "raid_level": "raid1", 00:11:50.719 "superblock": false, 00:11:50.719 "num_base_bdevs": 3, 00:11:50.719 "num_base_bdevs_discovered": 1, 00:11:50.719 "num_base_bdevs_operational": 3, 00:11:50.719 "base_bdevs_list": [ 00:11:50.719 { 00:11:50.719 "name": null, 00:11:50.719 "uuid": "90cfd24b-4225-11ef-aa83-81fbc7dfef58", 00:11:50.719 "is_configured": false, 00:11:50.719 "data_offset": 0, 00:11:50.719 "data_size": 65536 00:11:50.719 }, 00:11:50.719 { 00:11:50.719 "name": null, 00:11:50.719 "uuid": 
"8edef975-4225-11ef-aa83-81fbc7dfef58", 00:11:50.719 "is_configured": false, 00:11:50.719 "data_offset": 0, 00:11:50.719 "data_size": 65536 00:11:50.719 }, 00:11:50.719 { 00:11:50.719 "name": "BaseBdev3", 00:11:50.719 "uuid": "8f47f584-4225-11ef-aa83-81fbc7dfef58", 00:11:50.719 "is_configured": true, 00:11:50.719 "data_offset": 0, 00:11:50.719 "data_size": 65536 00:11:50.719 } 00:11:50.719 ] 00:11:50.719 }' 00:11:50.719 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:50.719 21:11:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:50.978 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:50.978 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:11:51.236 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:11:51.236 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:11:51.494 [2024-07-14 21:11:02.915680] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:11:51.494 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:51.494 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:51.494 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:51.494 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:51.494 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:51.494 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:51.494 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:51.494 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:51.494 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:51.494 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:51.494 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:51.494 21:11:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:51.770 21:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:51.770 "name": "Existed_Raid", 00:11:51.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:51.770 "strip_size_kb": 0, 00:11:51.770 "state": "configuring", 00:11:51.770 "raid_level": "raid1", 00:11:51.770 "superblock": false, 00:11:51.770 "num_base_bdevs": 3, 00:11:51.770 "num_base_bdevs_discovered": 2, 00:11:51.770 "num_base_bdevs_operational": 3, 00:11:51.770 "base_bdevs_list": [ 00:11:51.770 { 00:11:51.770 "name": null, 00:11:51.770 "uuid": "90cfd24b-4225-11ef-aa83-81fbc7dfef58", 00:11:51.770 "is_configured": false, 00:11:51.770 "data_offset": 0, 00:11:51.770 "data_size": 65536 
00:11:51.770 }, 00:11:51.770 { 00:11:51.770 "name": "BaseBdev2", 00:11:51.770 "uuid": "8edef975-4225-11ef-aa83-81fbc7dfef58", 00:11:51.770 "is_configured": true, 00:11:51.770 "data_offset": 0, 00:11:51.770 "data_size": 65536 00:11:51.770 }, 00:11:51.770 { 00:11:51.770 "name": "BaseBdev3", 00:11:51.770 "uuid": "8f47f584-4225-11ef-aa83-81fbc7dfef58", 00:11:51.770 "is_configured": true, 00:11:51.770 "data_offset": 0, 00:11:51.770 "data_size": 65536 00:11:51.770 } 00:11:51.770 ] 00:11:51.770 }' 00:11:51.770 21:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:51.770 21:11:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:52.042 21:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.042 21:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:11:52.299 21:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:11:52.299 21:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:52.299 21:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:11:52.556 21:11:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 90cfd24b-4225-11ef-aa83-81fbc7dfef58 00:11:52.814 [2024-07-14 21:11:04.171898] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:11:52.814 [2024-07-14 21:11:04.171922] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xcde72634f00 00:11:52.814 [2024-07-14 21:11:04.171942] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:11:52.814 [2024-07-14 21:11:04.172013] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xcde72697e20 00:11:52.814 [2024-07-14 21:11:04.172098] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xcde72634f00 00:11:52.814 [2024-07-14 21:11:04.172103] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xcde72634f00 00:11:52.814 [2024-07-14 21:11:04.172146] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:11:52.814 NewBaseBdev 00:11:52.814 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:11:52.814 21:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:11:52.814 21:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:52.814 21:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:11:52.814 21:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:52.814 21:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:52.814 21:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:53.072 21:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:11:53.330 [ 00:11:53.330 { 00:11:53.330 "name": "NewBaseBdev", 00:11:53.330 "aliases": [ 00:11:53.330 "90cfd24b-4225-11ef-aa83-81fbc7dfef58" 00:11:53.330 ], 00:11:53.330 "product_name": "Malloc disk", 00:11:53.330 "block_size": 512, 00:11:53.330 "num_blocks": 65536, 00:11:53.330 "uuid": "90cfd24b-4225-11ef-aa83-81fbc7dfef58", 00:11:53.330 "assigned_rate_limits": { 00:11:53.330 "rw_ios_per_sec": 0, 00:11:53.330 "rw_mbytes_per_sec": 0, 00:11:53.330 "r_mbytes_per_sec": 0, 00:11:53.330 "w_mbytes_per_sec": 0 00:11:53.330 }, 00:11:53.330 "claimed": true, 00:11:53.330 "claim_type": "exclusive_write", 00:11:53.330 "zoned": false, 00:11:53.330 "supported_io_types": { 00:11:53.330 "read": true, 00:11:53.330 "write": true, 00:11:53.330 "unmap": true, 00:11:53.330 "flush": true, 00:11:53.330 "reset": true, 00:11:53.330 "nvme_admin": false, 00:11:53.330 "nvme_io": false, 00:11:53.330 "nvme_io_md": false, 00:11:53.330 "write_zeroes": true, 00:11:53.330 "zcopy": true, 00:11:53.330 "get_zone_info": false, 00:11:53.330 "zone_management": false, 00:11:53.330 "zone_append": false, 00:11:53.330 "compare": false, 00:11:53.330 "compare_and_write": false, 00:11:53.330 "abort": true, 00:11:53.330 "seek_hole": false, 00:11:53.330 "seek_data": false, 00:11:53.330 "copy": true, 00:11:53.330 "nvme_iov_md": false 00:11:53.330 }, 00:11:53.330 "memory_domains": [ 00:11:53.330 { 00:11:53.330 "dma_device_id": "system", 00:11:53.330 "dma_device_type": 1 00:11:53.330 }, 00:11:53.330 { 00:11:53.330 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:53.330 "dma_device_type": 2 00:11:53.330 } 00:11:53.330 ], 00:11:53.330 "driver_specific": {} 00:11:53.330 } 00:11:53.330 ] 00:11:53.330 21:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:11:53.330 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:11:53.330 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:53.330 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:11:53.330 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:53.330 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:53.330 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:53.330 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:53.330 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:53.330 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:53.330 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:53.330 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:53.330 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:53.588 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:53.588 "name": "Existed_Raid", 00:11:53.588 "uuid": "9456e0b3-4225-11ef-aa83-81fbc7dfef58", 00:11:53.588 
"strip_size_kb": 0, 00:11:53.588 "state": "online", 00:11:53.588 "raid_level": "raid1", 00:11:53.588 "superblock": false, 00:11:53.588 "num_base_bdevs": 3, 00:11:53.588 "num_base_bdevs_discovered": 3, 00:11:53.588 "num_base_bdevs_operational": 3, 00:11:53.588 "base_bdevs_list": [ 00:11:53.588 { 00:11:53.588 "name": "NewBaseBdev", 00:11:53.588 "uuid": "90cfd24b-4225-11ef-aa83-81fbc7dfef58", 00:11:53.588 "is_configured": true, 00:11:53.588 "data_offset": 0, 00:11:53.588 "data_size": 65536 00:11:53.588 }, 00:11:53.588 { 00:11:53.588 "name": "BaseBdev2", 00:11:53.588 "uuid": "8edef975-4225-11ef-aa83-81fbc7dfef58", 00:11:53.588 "is_configured": true, 00:11:53.588 "data_offset": 0, 00:11:53.588 "data_size": 65536 00:11:53.588 }, 00:11:53.588 { 00:11:53.588 "name": "BaseBdev3", 00:11:53.588 "uuid": "8f47f584-4225-11ef-aa83-81fbc7dfef58", 00:11:53.588 "is_configured": true, 00:11:53.588 "data_offset": 0, 00:11:53.588 "data_size": 65536 00:11:53.588 } 00:11:53.588 ] 00:11:53.588 }' 00:11:53.588 21:11:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:53.588 21:11:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:53.847 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:11:53.847 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:11:53.847 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:11:53.847 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:11:53.847 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:11:53.847 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:11:53.847 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:11:53.847 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:11:54.106 [2024-07-14 21:11:05.491823] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:11:54.106 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:11:54.106 "name": "Existed_Raid", 00:11:54.106 "aliases": [ 00:11:54.106 "9456e0b3-4225-11ef-aa83-81fbc7dfef58" 00:11:54.106 ], 00:11:54.106 "product_name": "Raid Volume", 00:11:54.106 "block_size": 512, 00:11:54.106 "num_blocks": 65536, 00:11:54.106 "uuid": "9456e0b3-4225-11ef-aa83-81fbc7dfef58", 00:11:54.106 "assigned_rate_limits": { 00:11:54.106 "rw_ios_per_sec": 0, 00:11:54.106 "rw_mbytes_per_sec": 0, 00:11:54.106 "r_mbytes_per_sec": 0, 00:11:54.106 "w_mbytes_per_sec": 0 00:11:54.106 }, 00:11:54.106 "claimed": false, 00:11:54.106 "zoned": false, 00:11:54.106 "supported_io_types": { 00:11:54.106 "read": true, 00:11:54.106 "write": true, 00:11:54.106 "unmap": false, 00:11:54.106 "flush": false, 00:11:54.106 "reset": true, 00:11:54.106 "nvme_admin": false, 00:11:54.106 "nvme_io": false, 00:11:54.106 "nvme_io_md": false, 00:11:54.106 "write_zeroes": true, 00:11:54.106 "zcopy": false, 00:11:54.106 "get_zone_info": false, 00:11:54.106 "zone_management": false, 00:11:54.106 "zone_append": false, 00:11:54.106 "compare": false, 00:11:54.106 "compare_and_write": false, 00:11:54.106 "abort": false, 00:11:54.106 "seek_hole": false, 00:11:54.106 "seek_data": false, 
00:11:54.106 "copy": false, 00:11:54.106 "nvme_iov_md": false 00:11:54.106 }, 00:11:54.106 "memory_domains": [ 00:11:54.106 { 00:11:54.106 "dma_device_id": "system", 00:11:54.106 "dma_device_type": 1 00:11:54.106 }, 00:11:54.106 { 00:11:54.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.106 "dma_device_type": 2 00:11:54.106 }, 00:11:54.106 { 00:11:54.106 "dma_device_id": "system", 00:11:54.106 "dma_device_type": 1 00:11:54.106 }, 00:11:54.106 { 00:11:54.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.106 "dma_device_type": 2 00:11:54.106 }, 00:11:54.106 { 00:11:54.106 "dma_device_id": "system", 00:11:54.106 "dma_device_type": 1 00:11:54.106 }, 00:11:54.106 { 00:11:54.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.106 "dma_device_type": 2 00:11:54.106 } 00:11:54.106 ], 00:11:54.106 "driver_specific": { 00:11:54.106 "raid": { 00:11:54.106 "uuid": "9456e0b3-4225-11ef-aa83-81fbc7dfef58", 00:11:54.106 "strip_size_kb": 0, 00:11:54.106 "state": "online", 00:11:54.106 "raid_level": "raid1", 00:11:54.106 "superblock": false, 00:11:54.106 "num_base_bdevs": 3, 00:11:54.106 "num_base_bdevs_discovered": 3, 00:11:54.106 "num_base_bdevs_operational": 3, 00:11:54.106 "base_bdevs_list": [ 00:11:54.106 { 00:11:54.106 "name": "NewBaseBdev", 00:11:54.106 "uuid": "90cfd24b-4225-11ef-aa83-81fbc7dfef58", 00:11:54.106 "is_configured": true, 00:11:54.106 "data_offset": 0, 00:11:54.106 "data_size": 65536 00:11:54.106 }, 00:11:54.106 { 00:11:54.106 "name": "BaseBdev2", 00:11:54.106 "uuid": "8edef975-4225-11ef-aa83-81fbc7dfef58", 00:11:54.106 "is_configured": true, 00:11:54.106 "data_offset": 0, 00:11:54.106 "data_size": 65536 00:11:54.106 }, 00:11:54.106 { 00:11:54.106 "name": "BaseBdev3", 00:11:54.106 "uuid": "8f47f584-4225-11ef-aa83-81fbc7dfef58", 00:11:54.106 "is_configured": true, 00:11:54.106 "data_offset": 0, 00:11:54.106 "data_size": 65536 00:11:54.106 } 00:11:54.106 ] 00:11:54.106 } 00:11:54.106 } 00:11:54.106 }' 00:11:54.106 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:11:54.106 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:11:54.106 BaseBdev2 00:11:54.106 BaseBdev3' 00:11:54.106 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:54.106 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:11:54.106 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:54.365 "name": "NewBaseBdev", 00:11:54.365 "aliases": [ 00:11:54.365 "90cfd24b-4225-11ef-aa83-81fbc7dfef58" 00:11:54.365 ], 00:11:54.365 "product_name": "Malloc disk", 00:11:54.365 "block_size": 512, 00:11:54.365 "num_blocks": 65536, 00:11:54.365 "uuid": "90cfd24b-4225-11ef-aa83-81fbc7dfef58", 00:11:54.365 "assigned_rate_limits": { 00:11:54.365 "rw_ios_per_sec": 0, 00:11:54.365 "rw_mbytes_per_sec": 0, 00:11:54.365 "r_mbytes_per_sec": 0, 00:11:54.365 "w_mbytes_per_sec": 0 00:11:54.365 }, 00:11:54.365 "claimed": true, 00:11:54.365 "claim_type": "exclusive_write", 00:11:54.365 "zoned": false, 00:11:54.365 "supported_io_types": { 00:11:54.365 "read": true, 00:11:54.365 "write": true, 00:11:54.365 "unmap": true, 00:11:54.365 "flush": true, 00:11:54.365 
"reset": true, 00:11:54.365 "nvme_admin": false, 00:11:54.365 "nvme_io": false, 00:11:54.365 "nvme_io_md": false, 00:11:54.365 "write_zeroes": true, 00:11:54.365 "zcopy": true, 00:11:54.365 "get_zone_info": false, 00:11:54.365 "zone_management": false, 00:11:54.365 "zone_append": false, 00:11:54.365 "compare": false, 00:11:54.365 "compare_and_write": false, 00:11:54.365 "abort": true, 00:11:54.365 "seek_hole": false, 00:11:54.365 "seek_data": false, 00:11:54.365 "copy": true, 00:11:54.365 "nvme_iov_md": false 00:11:54.365 }, 00:11:54.365 "memory_domains": [ 00:11:54.365 { 00:11:54.365 "dma_device_id": "system", 00:11:54.365 "dma_device_type": 1 00:11:54.365 }, 00:11:54.365 { 00:11:54.365 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.365 "dma_device_type": 2 00:11:54.365 } 00:11:54.365 ], 00:11:54.365 "driver_specific": {} 00:11:54.365 }' 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:11:54.365 21:11:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:54.624 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:54.624 "name": "BaseBdev2", 00:11:54.624 "aliases": [ 00:11:54.624 "8edef975-4225-11ef-aa83-81fbc7dfef58" 00:11:54.624 ], 00:11:54.624 "product_name": "Malloc disk", 00:11:54.624 "block_size": 512, 00:11:54.624 "num_blocks": 65536, 00:11:54.624 "uuid": "8edef975-4225-11ef-aa83-81fbc7dfef58", 00:11:54.624 "assigned_rate_limits": { 00:11:54.624 "rw_ios_per_sec": 0, 00:11:54.624 "rw_mbytes_per_sec": 0, 00:11:54.624 "r_mbytes_per_sec": 0, 00:11:54.624 "w_mbytes_per_sec": 0 00:11:54.624 }, 00:11:54.624 "claimed": true, 00:11:54.624 "claim_type": "exclusive_write", 00:11:54.624 "zoned": false, 00:11:54.624 "supported_io_types": { 00:11:54.624 "read": true, 00:11:54.624 "write": true, 00:11:54.624 "unmap": true, 00:11:54.624 "flush": true, 00:11:54.624 "reset": true, 00:11:54.624 "nvme_admin": false, 00:11:54.625 "nvme_io": false, 00:11:54.625 "nvme_io_md": false, 00:11:54.625 "write_zeroes": true, 00:11:54.625 "zcopy": true, 00:11:54.625 
"get_zone_info": false, 00:11:54.625 "zone_management": false, 00:11:54.625 "zone_append": false, 00:11:54.625 "compare": false, 00:11:54.625 "compare_and_write": false, 00:11:54.625 "abort": true, 00:11:54.625 "seek_hole": false, 00:11:54.625 "seek_data": false, 00:11:54.625 "copy": true, 00:11:54.625 "nvme_iov_md": false 00:11:54.625 }, 00:11:54.625 "memory_domains": [ 00:11:54.625 { 00:11:54.625 "dma_device_id": "system", 00:11:54.625 "dma_device_type": 1 00:11:54.625 }, 00:11:54.625 { 00:11:54.625 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.625 "dma_device_type": 2 00:11:54.625 } 00:11:54.625 ], 00:11:54.625 "driver_specific": {} 00:11:54.625 }' 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:11:54.625 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:11:54.886 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:11:54.886 "name": "BaseBdev3", 00:11:54.886 "aliases": [ 00:11:54.886 "8f47f584-4225-11ef-aa83-81fbc7dfef58" 00:11:54.886 ], 00:11:54.886 "product_name": "Malloc disk", 00:11:54.886 "block_size": 512, 00:11:54.886 "num_blocks": 65536, 00:11:54.886 "uuid": "8f47f584-4225-11ef-aa83-81fbc7dfef58", 00:11:54.886 "assigned_rate_limits": { 00:11:54.886 "rw_ios_per_sec": 0, 00:11:54.886 "rw_mbytes_per_sec": 0, 00:11:54.886 "r_mbytes_per_sec": 0, 00:11:54.886 "w_mbytes_per_sec": 0 00:11:54.886 }, 00:11:54.886 "claimed": true, 00:11:54.886 "claim_type": "exclusive_write", 00:11:54.886 "zoned": false, 00:11:54.886 "supported_io_types": { 00:11:54.886 "read": true, 00:11:54.886 "write": true, 00:11:54.886 "unmap": true, 00:11:54.886 "flush": true, 00:11:54.886 "reset": true, 00:11:54.886 "nvme_admin": false, 00:11:54.886 "nvme_io": false, 00:11:54.886 "nvme_io_md": false, 00:11:54.886 "write_zeroes": true, 00:11:54.886 "zcopy": true, 00:11:54.886 "get_zone_info": false, 00:11:54.886 "zone_management": false, 00:11:54.886 "zone_append": false, 00:11:54.886 "compare": false, 00:11:54.886 "compare_and_write": false, 00:11:54.886 "abort": true, 
00:11:54.886 "seek_hole": false, 00:11:54.886 "seek_data": false, 00:11:54.886 "copy": true, 00:11:54.886 "nvme_iov_md": false 00:11:54.886 }, 00:11:54.886 "memory_domains": [ 00:11:54.886 { 00:11:54.886 "dma_device_id": "system", 00:11:54.886 "dma_device_type": 1 00:11:54.886 }, 00:11:54.886 { 00:11:54.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:54.886 "dma_device_type": 2 00:11:54.886 } 00:11:54.886 ], 00:11:54.886 "driver_specific": {} 00:11:54.886 }' 00:11:54.886 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.886 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:11:54.886 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:11:54.886 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.886 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:11:54.886 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:11:54.886 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.886 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:11:54.886 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:11:54.886 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:54.886 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:55.191 [2024-07-14 21:11:06.683851] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:55.191 [2024-07-14 21:11:06.683868] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:11:55.191 [2024-07-14 21:11:06.683905] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:11:55.191 [2024-07-14 21:11:06.684040] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:11:55.191 [2024-07-14 21:11:06.684046] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xcde72634f00 name Existed_Raid, state offline 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 56039 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 56039 ']' 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 56039 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 56039 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' 
bdev_svc = sudo ']' 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56039' 00:11:55.191 killing process with pid 56039 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 56039 00:11:55.191 [2024-07-14 21:11:06.709936] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:11:55.191 21:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 56039 00:11:55.475 [2024-07-14 21:11:06.727376] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:11:55.475 00:11:55.475 real 0m22.833s 00:11:55.475 user 0m41.644s 00:11:55.475 sys 0m3.194s 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:11:55.475 ************************************ 00:11:55.475 END TEST raid_state_function_test 00:11:55.475 ************************************ 00:11:55.475 21:11:06 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:11:55.475 21:11:06 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 3 true 00:11:55.475 21:11:06 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:11:55.475 21:11:06 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.475 21:11:06 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:11:55.475 ************************************ 00:11:55.475 START TEST raid_state_function_test_sb 00:11:55.475 ************************************ 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 3 true 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=3 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 
00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=56764 00:11:55.475 Process raid pid: 56764 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 56764' 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 56764 /var/tmp/spdk-raid.sock 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 56764 ']' 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:55.475 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:11:55.475 21:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:11:55.476 21:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:55.476 21:11:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:55.476 [2024-07-14 21:11:06.976102] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:11:55.476 [2024-07-14 21:11:06.976439] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:11:56.043 EAL: TSC is not safe to use in SMP mode 00:11:56.043 EAL: TSC is not invariant 00:11:56.043 [2024-07-14 21:11:07.509387] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:56.302 [2024-07-14 21:11:07.615772] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:11:56.302 [2024-07-14 21:11:07.618434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:56.302 [2024-07-14 21:11:07.619403] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.302 [2024-07-14 21:11:07.619421] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:11:56.561 21:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:56.561 21:11:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:11:56.561 21:11:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:56.819 [2024-07-14 21:11:08.165773] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:56.819 [2024-07-14 21:11:08.165829] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:56.819 [2024-07-14 21:11:08.165833] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:56.819 [2024-07-14 21:11:08.165857] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:56.819 [2024-07-14 21:11:08.165860] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:56.819 [2024-07-14 21:11:08.165866] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:56.819 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:56.819 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:56.819 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:56.820 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:56.820 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:56.820 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:56.820 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:56.820 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:56.820 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:56.820 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:56.820 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:56.820 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:57.079 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:57.079 "name": "Existed_Raid", 00:11:57.079 "uuid": "96b84948-4225-11ef-aa83-81fbc7dfef58", 00:11:57.079 "strip_size_kb": 0, 00:11:57.079 "state": "configuring", 00:11:57.079 "raid_level": "raid1", 00:11:57.079 "superblock": true, 00:11:57.079 "num_base_bdevs": 3, 00:11:57.079 "num_base_bdevs_discovered": 0, 00:11:57.079 "num_base_bdevs_operational": 
3, 00:11:57.079 "base_bdevs_list": [ 00:11:57.079 { 00:11:57.079 "name": "BaseBdev1", 00:11:57.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.079 "is_configured": false, 00:11:57.079 "data_offset": 0, 00:11:57.079 "data_size": 0 00:11:57.079 }, 00:11:57.079 { 00:11:57.079 "name": "BaseBdev2", 00:11:57.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.079 "is_configured": false, 00:11:57.079 "data_offset": 0, 00:11:57.079 "data_size": 0 00:11:57.079 }, 00:11:57.079 { 00:11:57.079 "name": "BaseBdev3", 00:11:57.079 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:57.079 "is_configured": false, 00:11:57.079 "data_offset": 0, 00:11:57.079 "data_size": 0 00:11:57.079 } 00:11:57.079 ] 00:11:57.079 }' 00:11:57.079 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:57.079 21:11:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:57.339 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:57.599 [2024-07-14 21:11:08.965796] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:57.599 [2024-07-14 21:11:08.965814] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a6568434500 name Existed_Raid, state configuring 00:11:57.599 21:11:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:57.856 [2024-07-14 21:11:09.233795] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:11:57.856 [2024-07-14 21:11:09.233845] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:11:57.856 [2024-07-14 21:11:09.233850] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:57.856 [2024-07-14 21:11:09.233873] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:57.856 [2024-07-14 21:11:09.233876] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:57.856 [2024-07-14 21:11:09.233882] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:57.857 21:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:11:58.114 [2024-07-14 21:11:09.498725] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:58.114 BaseBdev1 00:11:58.114 21:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:11:58.114 21:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:11:58.114 21:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:11:58.114 21:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:11:58.114 21:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:11:58.114 21:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:11:58.114 21:11:09 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:11:58.371 21:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:11:58.629 [ 00:11:58.629 { 00:11:58.629 "name": "BaseBdev1", 00:11:58.629 "aliases": [ 00:11:58.629 "97838a21-4225-11ef-aa83-81fbc7dfef58" 00:11:58.629 ], 00:11:58.629 "product_name": "Malloc disk", 00:11:58.629 "block_size": 512, 00:11:58.629 "num_blocks": 65536, 00:11:58.629 "uuid": "97838a21-4225-11ef-aa83-81fbc7dfef58", 00:11:58.629 "assigned_rate_limits": { 00:11:58.629 "rw_ios_per_sec": 0, 00:11:58.629 "rw_mbytes_per_sec": 0, 00:11:58.629 "r_mbytes_per_sec": 0, 00:11:58.629 "w_mbytes_per_sec": 0 00:11:58.629 }, 00:11:58.629 "claimed": true, 00:11:58.629 "claim_type": "exclusive_write", 00:11:58.629 "zoned": false, 00:11:58.629 "supported_io_types": { 00:11:58.629 "read": true, 00:11:58.629 "write": true, 00:11:58.629 "unmap": true, 00:11:58.629 "flush": true, 00:11:58.629 "reset": true, 00:11:58.629 "nvme_admin": false, 00:11:58.629 "nvme_io": false, 00:11:58.629 "nvme_io_md": false, 00:11:58.629 "write_zeroes": true, 00:11:58.629 "zcopy": true, 00:11:58.629 "get_zone_info": false, 00:11:58.629 "zone_management": false, 00:11:58.629 "zone_append": false, 00:11:58.629 "compare": false, 00:11:58.629 "compare_and_write": false, 00:11:58.629 "abort": true, 00:11:58.629 "seek_hole": false, 00:11:58.629 "seek_data": false, 00:11:58.629 "copy": true, 00:11:58.629 "nvme_iov_md": false 00:11:58.629 }, 00:11:58.629 "memory_domains": [ 00:11:58.629 { 00:11:58.629 "dma_device_id": "system", 00:11:58.629 "dma_device_type": 1 00:11:58.629 }, 00:11:58.629 { 00:11:58.629 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:58.629 "dma_device_type": 2 00:11:58.629 } 00:11:58.629 ], 00:11:58.629 "driver_specific": {} 00:11:58.629 } 00:11:58.629 ] 00:11:58.629 21:11:09 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:11:58.629 21:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:58.629 21:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:58.629 21:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:58.629 21:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:11:58.629 21:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:58.629 21:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:58.629 21:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:58.629 21:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:58.629 21:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:58.629 21:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:58.629 21:11:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:58.629 21:11:09 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:58.888 21:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:58.888 "name": "Existed_Raid", 00:11:58.888 "uuid": "975b40f6-4225-11ef-aa83-81fbc7dfef58", 00:11:58.888 "strip_size_kb": 0, 00:11:58.888 "state": "configuring", 00:11:58.888 "raid_level": "raid1", 00:11:58.888 "superblock": true, 00:11:58.888 "num_base_bdevs": 3, 00:11:58.888 "num_base_bdevs_discovered": 1, 00:11:58.888 "num_base_bdevs_operational": 3, 00:11:58.888 "base_bdevs_list": [ 00:11:58.888 { 00:11:58.888 "name": "BaseBdev1", 00:11:58.888 "uuid": "97838a21-4225-11ef-aa83-81fbc7dfef58", 00:11:58.888 "is_configured": true, 00:11:58.888 "data_offset": 2048, 00:11:58.888 "data_size": 63488 00:11:58.888 }, 00:11:58.888 { 00:11:58.888 "name": "BaseBdev2", 00:11:58.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.888 "is_configured": false, 00:11:58.888 "data_offset": 0, 00:11:58.888 "data_size": 0 00:11:58.888 }, 00:11:58.888 { 00:11:58.888 "name": "BaseBdev3", 00:11:58.888 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:58.888 "is_configured": false, 00:11:58.888 "data_offset": 0, 00:11:58.888 "data_size": 0 00:11:58.888 } 00:11:58.888 ] 00:11:58.888 }' 00:11:58.888 21:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:58.888 21:11:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:11:59.146 21:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:11:59.406 [2024-07-14 21:11:10.721827] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:11:59.406 [2024-07-14 21:11:10.721868] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a6568434500 name Existed_Raid, state configuring 00:11:59.406 21:11:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:11:59.664 [2024-07-14 21:11:10.989857] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:11:59.664 [2024-07-14 21:11:10.990762] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:11:59.664 [2024-07-14 21:11:10.990824] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:11:59.664 [2024-07-14 21:11:10.990829] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:11:59.664 [2024-07-14 21:11:10.990853] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:11:59.664 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:11:59.923 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:11:59.923 "name": "Existed_Raid", 00:11:59.923 "uuid": "986734eb-4225-11ef-aa83-81fbc7dfef58", 00:11:59.923 "strip_size_kb": 0, 00:11:59.923 "state": "configuring", 00:11:59.923 "raid_level": "raid1", 00:11:59.923 "superblock": true, 00:11:59.923 "num_base_bdevs": 3, 00:11:59.923 "num_base_bdevs_discovered": 1, 00:11:59.923 "num_base_bdevs_operational": 3, 00:11:59.923 "base_bdevs_list": [ 00:11:59.923 { 00:11:59.923 "name": "BaseBdev1", 00:11:59.923 "uuid": "97838a21-4225-11ef-aa83-81fbc7dfef58", 00:11:59.923 "is_configured": true, 00:11:59.923 "data_offset": 2048, 00:11:59.923 "data_size": 63488 00:11:59.923 }, 00:11:59.923 { 00:11:59.923 "name": "BaseBdev2", 00:11:59.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.923 "is_configured": false, 00:11:59.923 "data_offset": 0, 00:11:59.923 "data_size": 0 00:11:59.923 }, 00:11:59.923 { 00:11:59.923 "name": "BaseBdev3", 00:11:59.923 "uuid": "00000000-0000-0000-0000-000000000000", 00:11:59.923 "is_configured": false, 00:11:59.923 "data_offset": 0, 00:11:59.923 "data_size": 0 00:11:59.923 } 00:11:59.923 ] 00:11:59.923 }' 00:11:59.923 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:11:59.923 21:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:00.181 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:00.439 [2024-07-14 21:11:11.834028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:00.439 BaseBdev2 00:12:00.439 21:11:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:00.439 21:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:00.439 21:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:00.439 21:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:00.439 21:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:00.439 21:11:11 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:00.439 21:11:11 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:00.697 21:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:00.955 [ 00:12:00.955 { 00:12:00.955 "name": "BaseBdev2", 00:12:00.955 "aliases": [ 00:12:00.955 "98e7ffe4-4225-11ef-aa83-81fbc7dfef58" 00:12:00.955 ], 00:12:00.955 "product_name": "Malloc disk", 00:12:00.955 "block_size": 512, 00:12:00.955 "num_blocks": 65536, 00:12:00.955 "uuid": "98e7ffe4-4225-11ef-aa83-81fbc7dfef58", 00:12:00.955 "assigned_rate_limits": { 00:12:00.955 "rw_ios_per_sec": 0, 00:12:00.955 "rw_mbytes_per_sec": 0, 00:12:00.955 "r_mbytes_per_sec": 0, 00:12:00.955 "w_mbytes_per_sec": 0 00:12:00.955 }, 00:12:00.955 "claimed": true, 00:12:00.955 "claim_type": "exclusive_write", 00:12:00.955 "zoned": false, 00:12:00.955 "supported_io_types": { 00:12:00.955 "read": true, 00:12:00.955 "write": true, 00:12:00.955 "unmap": true, 00:12:00.955 "flush": true, 00:12:00.955 "reset": true, 00:12:00.955 "nvme_admin": false, 00:12:00.955 "nvme_io": false, 00:12:00.955 "nvme_io_md": false, 00:12:00.955 "write_zeroes": true, 00:12:00.955 "zcopy": true, 00:12:00.955 "get_zone_info": false, 00:12:00.955 "zone_management": false, 00:12:00.955 "zone_append": false, 00:12:00.955 "compare": false, 00:12:00.955 "compare_and_write": false, 00:12:00.955 "abort": true, 00:12:00.955 "seek_hole": false, 00:12:00.955 "seek_data": false, 00:12:00.955 "copy": true, 00:12:00.955 "nvme_iov_md": false 00:12:00.955 }, 00:12:00.955 "memory_domains": [ 00:12:00.955 { 00:12:00.955 "dma_device_id": "system", 00:12:00.955 "dma_device_type": 1 00:12:00.955 }, 00:12:00.955 { 00:12:00.955 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:00.955 "dma_device_type": 2 00:12:00.955 } 00:12:00.955 ], 00:12:00.955 "driver_specific": {} 00:12:00.955 } 00:12:00.955 ] 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:00.955 21:11:12 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:00.955 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:01.213 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:01.213 "name": "Existed_Raid", 00:12:01.213 "uuid": "986734eb-4225-11ef-aa83-81fbc7dfef58", 00:12:01.213 "strip_size_kb": 0, 00:12:01.213 "state": "configuring", 00:12:01.213 "raid_level": "raid1", 00:12:01.213 "superblock": true, 00:12:01.213 "num_base_bdevs": 3, 00:12:01.213 "num_base_bdevs_discovered": 2, 00:12:01.213 "num_base_bdevs_operational": 3, 00:12:01.213 "base_bdevs_list": [ 00:12:01.213 { 00:12:01.213 "name": "BaseBdev1", 00:12:01.213 "uuid": "97838a21-4225-11ef-aa83-81fbc7dfef58", 00:12:01.213 "is_configured": true, 00:12:01.213 "data_offset": 2048, 00:12:01.213 "data_size": 63488 00:12:01.213 }, 00:12:01.213 { 00:12:01.213 "name": "BaseBdev2", 00:12:01.213 "uuid": "98e7ffe4-4225-11ef-aa83-81fbc7dfef58", 00:12:01.213 "is_configured": true, 00:12:01.213 "data_offset": 2048, 00:12:01.213 "data_size": 63488 00:12:01.213 }, 00:12:01.213 { 00:12:01.213 "name": "BaseBdev3", 00:12:01.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:01.213 "is_configured": false, 00:12:01.213 "data_offset": 0, 00:12:01.213 "data_size": 0 00:12:01.213 } 00:12:01.213 ] 00:12:01.213 }' 00:12:01.213 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:01.213 21:11:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:01.471 21:11:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:01.728 [2024-07-14 21:11:13.074069] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:01.728 [2024-07-14 21:11:13.074127] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a6568434a00 00:12:01.728 [2024-07-14 21:11:13.074133] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:01.728 [2024-07-14 21:11:13.074151] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a6568497e20 00:12:01.728 [2024-07-14 21:11:13.074203] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a6568434a00 00:12:01.728 [2024-07-14 21:11:13.074207] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1a6568434a00 00:12:01.728 [2024-07-14 21:11:13.074228] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:01.728 BaseBdev3 00:12:01.728 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:01.728 21:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:01.728 21:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:01.728 21:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:01.728 21:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:01.728 21:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:01.728 21:11:13 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:01.985 21:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:02.243 [ 00:12:02.243 { 00:12:02.243 "name": "BaseBdev3", 00:12:02.243 "aliases": [ 00:12:02.243 "99a5376a-4225-11ef-aa83-81fbc7dfef58" 00:12:02.243 ], 00:12:02.243 "product_name": "Malloc disk", 00:12:02.243 "block_size": 512, 00:12:02.243 "num_blocks": 65536, 00:12:02.243 "uuid": "99a5376a-4225-11ef-aa83-81fbc7dfef58", 00:12:02.243 "assigned_rate_limits": { 00:12:02.243 "rw_ios_per_sec": 0, 00:12:02.243 "rw_mbytes_per_sec": 0, 00:12:02.243 "r_mbytes_per_sec": 0, 00:12:02.243 "w_mbytes_per_sec": 0 00:12:02.243 }, 00:12:02.243 "claimed": true, 00:12:02.243 "claim_type": "exclusive_write", 00:12:02.243 "zoned": false, 00:12:02.243 "supported_io_types": { 00:12:02.243 "read": true, 00:12:02.243 "write": true, 00:12:02.243 "unmap": true, 00:12:02.243 "flush": true, 00:12:02.243 "reset": true, 00:12:02.243 "nvme_admin": false, 00:12:02.243 "nvme_io": false, 00:12:02.243 "nvme_io_md": false, 00:12:02.243 "write_zeroes": true, 00:12:02.243 "zcopy": true, 00:12:02.243 "get_zone_info": false, 00:12:02.243 "zone_management": false, 00:12:02.243 "zone_append": false, 00:12:02.243 "compare": false, 00:12:02.243 "compare_and_write": false, 00:12:02.243 "abort": true, 00:12:02.243 "seek_hole": false, 00:12:02.243 "seek_data": false, 00:12:02.243 "copy": true, 00:12:02.243 "nvme_iov_md": false 00:12:02.243 }, 00:12:02.243 "memory_domains": [ 00:12:02.243 { 00:12:02.243 "dma_device_id": "system", 00:12:02.243 "dma_device_type": 1 00:12:02.243 }, 00:12:02.243 { 00:12:02.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:02.243 "dma_device_type": 2 00:12:02.243 } 00:12:02.243 ], 00:12:02.243 "driver_specific": {} 00:12:02.243 } 00:12:02.243 ] 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:02.243 21:11:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:02.243 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:02.500 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:02.500 "name": "Existed_Raid", 00:12:02.500 "uuid": "986734eb-4225-11ef-aa83-81fbc7dfef58", 00:12:02.500 "strip_size_kb": 0, 00:12:02.500 "state": "online", 00:12:02.500 "raid_level": "raid1", 00:12:02.500 "superblock": true, 00:12:02.500 "num_base_bdevs": 3, 00:12:02.500 "num_base_bdevs_discovered": 3, 00:12:02.500 "num_base_bdevs_operational": 3, 00:12:02.500 "base_bdevs_list": [ 00:12:02.500 { 00:12:02.500 "name": "BaseBdev1", 00:12:02.500 "uuid": "97838a21-4225-11ef-aa83-81fbc7dfef58", 00:12:02.500 "is_configured": true, 00:12:02.500 "data_offset": 2048, 00:12:02.500 "data_size": 63488 00:12:02.500 }, 00:12:02.500 { 00:12:02.500 "name": "BaseBdev2", 00:12:02.500 "uuid": "98e7ffe4-4225-11ef-aa83-81fbc7dfef58", 00:12:02.500 "is_configured": true, 00:12:02.500 "data_offset": 2048, 00:12:02.500 "data_size": 63488 00:12:02.500 }, 00:12:02.500 { 00:12:02.500 "name": "BaseBdev3", 00:12:02.500 "uuid": "99a5376a-4225-11ef-aa83-81fbc7dfef58", 00:12:02.500 "is_configured": true, 00:12:02.500 "data_offset": 2048, 00:12:02.500 "data_size": 63488 00:12:02.500 } 00:12:02.500 ] 00:12:02.500 }' 00:12:02.500 21:11:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:02.500 21:11:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:02.757 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:02.757 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:02.757 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:02.757 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:02.757 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:02.757 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:02.757 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:02.757 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:03.015 [2024-07-14 21:11:14.422024] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:03.015 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:03.015 "name": "Existed_Raid", 00:12:03.015 "aliases": [ 00:12:03.015 "986734eb-4225-11ef-aa83-81fbc7dfef58" 00:12:03.015 ], 00:12:03.015 "product_name": "Raid Volume", 00:12:03.015 "block_size": 512, 00:12:03.015 "num_blocks": 63488, 00:12:03.015 "uuid": "986734eb-4225-11ef-aa83-81fbc7dfef58", 00:12:03.015 "assigned_rate_limits": { 00:12:03.015 "rw_ios_per_sec": 0, 00:12:03.015 "rw_mbytes_per_sec": 0, 00:12:03.015 "r_mbytes_per_sec": 0, 00:12:03.015 "w_mbytes_per_sec": 0 00:12:03.015 }, 00:12:03.015 "claimed": false, 00:12:03.015 "zoned": false, 00:12:03.015 "supported_io_types": { 00:12:03.015 "read": true, 
00:12:03.015 "write": true, 00:12:03.015 "unmap": false, 00:12:03.015 "flush": false, 00:12:03.015 "reset": true, 00:12:03.015 "nvme_admin": false, 00:12:03.015 "nvme_io": false, 00:12:03.015 "nvme_io_md": false, 00:12:03.015 "write_zeroes": true, 00:12:03.015 "zcopy": false, 00:12:03.015 "get_zone_info": false, 00:12:03.015 "zone_management": false, 00:12:03.015 "zone_append": false, 00:12:03.015 "compare": false, 00:12:03.015 "compare_and_write": false, 00:12:03.015 "abort": false, 00:12:03.015 "seek_hole": false, 00:12:03.015 "seek_data": false, 00:12:03.015 "copy": false, 00:12:03.015 "nvme_iov_md": false 00:12:03.015 }, 00:12:03.015 "memory_domains": [ 00:12:03.015 { 00:12:03.015 "dma_device_id": "system", 00:12:03.015 "dma_device_type": 1 00:12:03.015 }, 00:12:03.015 { 00:12:03.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.015 "dma_device_type": 2 00:12:03.015 }, 00:12:03.015 { 00:12:03.015 "dma_device_id": "system", 00:12:03.015 "dma_device_type": 1 00:12:03.015 }, 00:12:03.015 { 00:12:03.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.015 "dma_device_type": 2 00:12:03.015 }, 00:12:03.015 { 00:12:03.015 "dma_device_id": "system", 00:12:03.015 "dma_device_type": 1 00:12:03.015 }, 00:12:03.015 { 00:12:03.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.015 "dma_device_type": 2 00:12:03.015 } 00:12:03.015 ], 00:12:03.015 "driver_specific": { 00:12:03.015 "raid": { 00:12:03.015 "uuid": "986734eb-4225-11ef-aa83-81fbc7dfef58", 00:12:03.015 "strip_size_kb": 0, 00:12:03.015 "state": "online", 00:12:03.015 "raid_level": "raid1", 00:12:03.015 "superblock": true, 00:12:03.015 "num_base_bdevs": 3, 00:12:03.015 "num_base_bdevs_discovered": 3, 00:12:03.015 "num_base_bdevs_operational": 3, 00:12:03.015 "base_bdevs_list": [ 00:12:03.015 { 00:12:03.015 "name": "BaseBdev1", 00:12:03.015 "uuid": "97838a21-4225-11ef-aa83-81fbc7dfef58", 00:12:03.015 "is_configured": true, 00:12:03.015 "data_offset": 2048, 00:12:03.015 "data_size": 63488 00:12:03.015 }, 00:12:03.015 { 00:12:03.015 "name": "BaseBdev2", 00:12:03.015 "uuid": "98e7ffe4-4225-11ef-aa83-81fbc7dfef58", 00:12:03.015 "is_configured": true, 00:12:03.015 "data_offset": 2048, 00:12:03.015 "data_size": 63488 00:12:03.015 }, 00:12:03.015 { 00:12:03.015 "name": "BaseBdev3", 00:12:03.015 "uuid": "99a5376a-4225-11ef-aa83-81fbc7dfef58", 00:12:03.015 "is_configured": true, 00:12:03.015 "data_offset": 2048, 00:12:03.015 "data_size": 63488 00:12:03.015 } 00:12:03.015 ] 00:12:03.015 } 00:12:03.015 } 00:12:03.015 }' 00:12:03.015 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:03.015 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:03.015 BaseBdev2 00:12:03.015 BaseBdev3' 00:12:03.015 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:03.015 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:03.015 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:03.273 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:03.273 "name": "BaseBdev1", 00:12:03.273 "aliases": [ 00:12:03.273 "97838a21-4225-11ef-aa83-81fbc7dfef58" 00:12:03.273 ], 00:12:03.273 "product_name": "Malloc disk", 00:12:03.273 
"block_size": 512, 00:12:03.273 "num_blocks": 65536, 00:12:03.273 "uuid": "97838a21-4225-11ef-aa83-81fbc7dfef58", 00:12:03.273 "assigned_rate_limits": { 00:12:03.273 "rw_ios_per_sec": 0, 00:12:03.273 "rw_mbytes_per_sec": 0, 00:12:03.273 "r_mbytes_per_sec": 0, 00:12:03.273 "w_mbytes_per_sec": 0 00:12:03.273 }, 00:12:03.273 "claimed": true, 00:12:03.273 "claim_type": "exclusive_write", 00:12:03.273 "zoned": false, 00:12:03.273 "supported_io_types": { 00:12:03.273 "read": true, 00:12:03.273 "write": true, 00:12:03.273 "unmap": true, 00:12:03.273 "flush": true, 00:12:03.273 "reset": true, 00:12:03.273 "nvme_admin": false, 00:12:03.273 "nvme_io": false, 00:12:03.273 "nvme_io_md": false, 00:12:03.273 "write_zeroes": true, 00:12:03.273 "zcopy": true, 00:12:03.274 "get_zone_info": false, 00:12:03.274 "zone_management": false, 00:12:03.274 "zone_append": false, 00:12:03.274 "compare": false, 00:12:03.274 "compare_and_write": false, 00:12:03.274 "abort": true, 00:12:03.274 "seek_hole": false, 00:12:03.274 "seek_data": false, 00:12:03.274 "copy": true, 00:12:03.274 "nvme_iov_md": false 00:12:03.274 }, 00:12:03.274 "memory_domains": [ 00:12:03.274 { 00:12:03.274 "dma_device_id": "system", 00:12:03.274 "dma_device_type": 1 00:12:03.274 }, 00:12:03.274 { 00:12:03.274 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.274 "dma_device_type": 2 00:12:03.274 } 00:12:03.274 ], 00:12:03.274 "driver_specific": {} 00:12:03.274 }' 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:03.274 21:11:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:03.532 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:03.532 "name": "BaseBdev2", 00:12:03.532 "aliases": [ 00:12:03.532 "98e7ffe4-4225-11ef-aa83-81fbc7dfef58" 00:12:03.532 ], 00:12:03.532 "product_name": "Malloc disk", 00:12:03.532 "block_size": 512, 00:12:03.532 "num_blocks": 65536, 00:12:03.532 "uuid": "98e7ffe4-4225-11ef-aa83-81fbc7dfef58", 00:12:03.532 "assigned_rate_limits": { 
00:12:03.532 "rw_ios_per_sec": 0, 00:12:03.532 "rw_mbytes_per_sec": 0, 00:12:03.532 "r_mbytes_per_sec": 0, 00:12:03.532 "w_mbytes_per_sec": 0 00:12:03.532 }, 00:12:03.532 "claimed": true, 00:12:03.532 "claim_type": "exclusive_write", 00:12:03.532 "zoned": false, 00:12:03.532 "supported_io_types": { 00:12:03.532 "read": true, 00:12:03.532 "write": true, 00:12:03.532 "unmap": true, 00:12:03.532 "flush": true, 00:12:03.532 "reset": true, 00:12:03.532 "nvme_admin": false, 00:12:03.532 "nvme_io": false, 00:12:03.532 "nvme_io_md": false, 00:12:03.532 "write_zeroes": true, 00:12:03.532 "zcopy": true, 00:12:03.532 "get_zone_info": false, 00:12:03.532 "zone_management": false, 00:12:03.532 "zone_append": false, 00:12:03.532 "compare": false, 00:12:03.532 "compare_and_write": false, 00:12:03.532 "abort": true, 00:12:03.532 "seek_hole": false, 00:12:03.532 "seek_data": false, 00:12:03.532 "copy": true, 00:12:03.532 "nvme_iov_md": false 00:12:03.532 }, 00:12:03.532 "memory_domains": [ 00:12:03.532 { 00:12:03.532 "dma_device_id": "system", 00:12:03.532 "dma_device_type": 1 00:12:03.532 }, 00:12:03.532 { 00:12:03.532 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:03.532 "dma_device_type": 2 00:12:03.532 } 00:12:03.532 ], 00:12:03.532 "driver_specific": {} 00:12:03.532 }' 00:12:03.532 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:03.532 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:03.532 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:03.532 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:03.532 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:03.790 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:03.790 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:03.790 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:03.790 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:03.790 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:03.790 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:03.790 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:03.790 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:03.790 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:03.790 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:04.048 "name": "BaseBdev3", 00:12:04.048 "aliases": [ 00:12:04.048 "99a5376a-4225-11ef-aa83-81fbc7dfef58" 00:12:04.048 ], 00:12:04.048 "product_name": "Malloc disk", 00:12:04.048 "block_size": 512, 00:12:04.048 "num_blocks": 65536, 00:12:04.048 "uuid": "99a5376a-4225-11ef-aa83-81fbc7dfef58", 00:12:04.048 "assigned_rate_limits": { 00:12:04.048 "rw_ios_per_sec": 0, 00:12:04.048 "rw_mbytes_per_sec": 0, 00:12:04.048 "r_mbytes_per_sec": 0, 00:12:04.048 "w_mbytes_per_sec": 0 
00:12:04.048 }, 00:12:04.048 "claimed": true, 00:12:04.048 "claim_type": "exclusive_write", 00:12:04.048 "zoned": false, 00:12:04.048 "supported_io_types": { 00:12:04.048 "read": true, 00:12:04.048 "write": true, 00:12:04.048 "unmap": true, 00:12:04.048 "flush": true, 00:12:04.048 "reset": true, 00:12:04.048 "nvme_admin": false, 00:12:04.048 "nvme_io": false, 00:12:04.048 "nvme_io_md": false, 00:12:04.048 "write_zeroes": true, 00:12:04.048 "zcopy": true, 00:12:04.048 "get_zone_info": false, 00:12:04.048 "zone_management": false, 00:12:04.048 "zone_append": false, 00:12:04.048 "compare": false, 00:12:04.048 "compare_and_write": false, 00:12:04.048 "abort": true, 00:12:04.048 "seek_hole": false, 00:12:04.048 "seek_data": false, 00:12:04.048 "copy": true, 00:12:04.048 "nvme_iov_md": false 00:12:04.048 }, 00:12:04.048 "memory_domains": [ 00:12:04.048 { 00:12:04.048 "dma_device_id": "system", 00:12:04.048 "dma_device_type": 1 00:12:04.048 }, 00:12:04.048 { 00:12:04.048 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:04.048 "dma_device_type": 2 00:12:04.048 } 00:12:04.048 ], 00:12:04.048 "driver_specific": {} 00:12:04.048 }' 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:04.048 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:04.307 [2024-07-14 21:11:15.694086] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:04.307 21:11:15 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.307 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:04.565 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:04.565 "name": "Existed_Raid", 00:12:04.565 "uuid": "986734eb-4225-11ef-aa83-81fbc7dfef58", 00:12:04.565 "strip_size_kb": 0, 00:12:04.565 "state": "online", 00:12:04.565 "raid_level": "raid1", 00:12:04.565 "superblock": true, 00:12:04.565 "num_base_bdevs": 3, 00:12:04.565 "num_base_bdevs_discovered": 2, 00:12:04.565 "num_base_bdevs_operational": 2, 00:12:04.565 "base_bdevs_list": [ 00:12:04.565 { 00:12:04.565 "name": null, 00:12:04.565 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:04.565 "is_configured": false, 00:12:04.565 "data_offset": 2048, 00:12:04.565 "data_size": 63488 00:12:04.565 }, 00:12:04.565 { 00:12:04.565 "name": "BaseBdev2", 00:12:04.565 "uuid": "98e7ffe4-4225-11ef-aa83-81fbc7dfef58", 00:12:04.565 "is_configured": true, 00:12:04.565 "data_offset": 2048, 00:12:04.565 "data_size": 63488 00:12:04.565 }, 00:12:04.565 { 00:12:04.565 "name": "BaseBdev3", 00:12:04.565 "uuid": "99a5376a-4225-11ef-aa83-81fbc7dfef58", 00:12:04.565 "is_configured": true, 00:12:04.565 "data_offset": 2048, 00:12:04.565 "data_size": 63488 00:12:04.565 } 00:12:04.565 ] 00:12:04.565 }' 00:12:04.566 21:11:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:04.566 21:11:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:04.824 21:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:04.824 21:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:04.824 21:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:04.824 21:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:05.082 21:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:05.082 21:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:05.082 21:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_malloc_delete BaseBdev2 00:12:05.341 [2024-07-14 21:11:16.780124] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:05.341 21:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:05.341 21:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:05.341 21:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.341 21:11:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:05.599 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:05.599 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:05.599 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:12:05.857 [2024-07-14 21:11:17.286533] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:05.857 [2024-07-14 21:11:17.286583] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:05.857 [2024-07-14 21:11:17.292899] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:05.857 [2024-07-14 21:11:17.292913] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:05.857 [2024-07-14 21:11:17.292933] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a6568434a00 name Existed_Raid, state offline 00:12:05.857 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:12:05.857 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:05.857 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:05.857 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:12:06.126 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:12:06.126 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:12:06.126 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 3 -gt 2 ']' 00:12:06.126 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:12:06.126 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:06.126 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:06.394 BaseBdev2 00:12:06.394 21:11:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:12:06.394 21:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:06.394 21:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:06.394 21:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:06.394 21:11:17 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:06.394 21:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:06.394 21:11:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:06.652 21:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:06.911 [ 00:12:06.911 { 00:12:06.911 "name": "BaseBdev2", 00:12:06.911 "aliases": [ 00:12:06.911 "9c75e13e-4225-11ef-aa83-81fbc7dfef58" 00:12:06.911 ], 00:12:06.911 "product_name": "Malloc disk", 00:12:06.911 "block_size": 512, 00:12:06.911 "num_blocks": 65536, 00:12:06.911 "uuid": "9c75e13e-4225-11ef-aa83-81fbc7dfef58", 00:12:06.911 "assigned_rate_limits": { 00:12:06.911 "rw_ios_per_sec": 0, 00:12:06.911 "rw_mbytes_per_sec": 0, 00:12:06.911 "r_mbytes_per_sec": 0, 00:12:06.911 "w_mbytes_per_sec": 0 00:12:06.911 }, 00:12:06.911 "claimed": false, 00:12:06.911 "zoned": false, 00:12:06.911 "supported_io_types": { 00:12:06.911 "read": true, 00:12:06.911 "write": true, 00:12:06.911 "unmap": true, 00:12:06.911 "flush": true, 00:12:06.911 "reset": true, 00:12:06.911 "nvme_admin": false, 00:12:06.911 "nvme_io": false, 00:12:06.911 "nvme_io_md": false, 00:12:06.911 "write_zeroes": true, 00:12:06.911 "zcopy": true, 00:12:06.911 "get_zone_info": false, 00:12:06.911 "zone_management": false, 00:12:06.911 "zone_append": false, 00:12:06.911 "compare": false, 00:12:06.911 "compare_and_write": false, 00:12:06.911 "abort": true, 00:12:06.911 "seek_hole": false, 00:12:06.911 "seek_data": false, 00:12:06.911 "copy": true, 00:12:06.911 "nvme_iov_md": false 00:12:06.911 }, 00:12:06.911 "memory_domains": [ 00:12:06.911 { 00:12:06.911 "dma_device_id": "system", 00:12:06.911 "dma_device_type": 1 00:12:06.911 }, 00:12:06.911 { 00:12:06.911 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:06.911 "dma_device_type": 2 00:12:06.911 } 00:12:06.911 ], 00:12:06.911 "driver_specific": {} 00:12:06.911 } 00:12:06.911 ] 00:12:06.911 21:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:06.911 21:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:06.911 21:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:06.911 21:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:07.170 BaseBdev3 00:12:07.170 21:11:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:12:07.170 21:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:07.170 21:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:07.170 21:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:07.170 21:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:07.170 21:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:07.170 21:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:07.429 21:11:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:07.687 [ 00:12:07.687 { 00:12:07.687 "name": "BaseBdev3", 00:12:07.687 "aliases": [ 00:12:07.687 "9ce80549-4225-11ef-aa83-81fbc7dfef58" 00:12:07.687 ], 00:12:07.687 "product_name": "Malloc disk", 00:12:07.687 "block_size": 512, 00:12:07.687 "num_blocks": 65536, 00:12:07.687 "uuid": "9ce80549-4225-11ef-aa83-81fbc7dfef58", 00:12:07.687 "assigned_rate_limits": { 00:12:07.687 "rw_ios_per_sec": 0, 00:12:07.687 "rw_mbytes_per_sec": 0, 00:12:07.687 "r_mbytes_per_sec": 0, 00:12:07.687 "w_mbytes_per_sec": 0 00:12:07.687 }, 00:12:07.687 "claimed": false, 00:12:07.687 "zoned": false, 00:12:07.687 "supported_io_types": { 00:12:07.687 "read": true, 00:12:07.687 "write": true, 00:12:07.687 "unmap": true, 00:12:07.687 "flush": true, 00:12:07.687 "reset": true, 00:12:07.688 "nvme_admin": false, 00:12:07.688 "nvme_io": false, 00:12:07.688 "nvme_io_md": false, 00:12:07.688 "write_zeroes": true, 00:12:07.688 "zcopy": true, 00:12:07.688 "get_zone_info": false, 00:12:07.688 "zone_management": false, 00:12:07.688 "zone_append": false, 00:12:07.688 "compare": false, 00:12:07.688 "compare_and_write": false, 00:12:07.688 "abort": true, 00:12:07.688 "seek_hole": false, 00:12:07.688 "seek_data": false, 00:12:07.688 "copy": true, 00:12:07.688 "nvme_iov_md": false 00:12:07.688 }, 00:12:07.688 "memory_domains": [ 00:12:07.688 { 00:12:07.688 "dma_device_id": "system", 00:12:07.688 "dma_device_type": 1 00:12:07.688 }, 00:12:07.688 { 00:12:07.688 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:07.688 "dma_device_type": 2 00:12:07.688 } 00:12:07.688 ], 00:12:07.688 "driver_specific": {} 00:12:07.688 } 00:12:07.688 ] 00:12:07.688 21:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:07.688 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:12:07.688 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:12:07.688 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n Existed_Raid 00:12:07.947 [2024-07-14 21:11:19.296963] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:07.947 [2024-07-14 21:11:19.297017] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:07.947 [2024-07-14 21:11:19.297041] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:07.947 [2024-07-14 21:11:19.297732] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:07.947 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:07.947 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:07.947 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:07.947 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:07.947 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local 
strip_size=0 00:12:07.947 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:07.947 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:07.947 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:07.947 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:07.947 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:07.978 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:07.978 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:08.237 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:08.237 "name": "Existed_Raid", 00:12:08.237 "uuid": "9d5ac56a-4225-11ef-aa83-81fbc7dfef58", 00:12:08.237 "strip_size_kb": 0, 00:12:08.237 "state": "configuring", 00:12:08.237 "raid_level": "raid1", 00:12:08.237 "superblock": true, 00:12:08.237 "num_base_bdevs": 3, 00:12:08.237 "num_base_bdevs_discovered": 2, 00:12:08.237 "num_base_bdevs_operational": 3, 00:12:08.237 "base_bdevs_list": [ 00:12:08.237 { 00:12:08.237 "name": "BaseBdev1", 00:12:08.237 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:08.237 "is_configured": false, 00:12:08.237 "data_offset": 0, 00:12:08.237 "data_size": 0 00:12:08.237 }, 00:12:08.237 { 00:12:08.237 "name": "BaseBdev2", 00:12:08.237 "uuid": "9c75e13e-4225-11ef-aa83-81fbc7dfef58", 00:12:08.237 "is_configured": true, 00:12:08.237 "data_offset": 2048, 00:12:08.237 "data_size": 63488 00:12:08.237 }, 00:12:08.237 { 00:12:08.237 "name": "BaseBdev3", 00:12:08.237 "uuid": "9ce80549-4225-11ef-aa83-81fbc7dfef58", 00:12:08.237 "is_configured": true, 00:12:08.237 "data_offset": 2048, 00:12:08.237 "data_size": 63488 00:12:08.237 } 00:12:08.237 ] 00:12:08.237 }' 00:12:08.237 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:08.237 21:11:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:08.496 21:11:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:12:08.754 [2024-07-14 21:11:20.113015] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:12:08.754 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:08.754 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:08.754 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:08.754 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:08.754 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:08.754 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:08.754 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:08.754 21:11:20 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:08.754 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:08.754 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:08.754 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:08.754 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:09.012 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:09.012 "name": "Existed_Raid", 00:12:09.012 "uuid": "9d5ac56a-4225-11ef-aa83-81fbc7dfef58", 00:12:09.012 "strip_size_kb": 0, 00:12:09.012 "state": "configuring", 00:12:09.012 "raid_level": "raid1", 00:12:09.012 "superblock": true, 00:12:09.012 "num_base_bdevs": 3, 00:12:09.012 "num_base_bdevs_discovered": 1, 00:12:09.012 "num_base_bdevs_operational": 3, 00:12:09.012 "base_bdevs_list": [ 00:12:09.012 { 00:12:09.012 "name": "BaseBdev1", 00:12:09.012 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:09.012 "is_configured": false, 00:12:09.012 "data_offset": 0, 00:12:09.012 "data_size": 0 00:12:09.012 }, 00:12:09.012 { 00:12:09.012 "name": null, 00:12:09.012 "uuid": "9c75e13e-4225-11ef-aa83-81fbc7dfef58", 00:12:09.012 "is_configured": false, 00:12:09.012 "data_offset": 2048, 00:12:09.012 "data_size": 63488 00:12:09.012 }, 00:12:09.012 { 00:12:09.012 "name": "BaseBdev3", 00:12:09.012 "uuid": "9ce80549-4225-11ef-aa83-81fbc7dfef58", 00:12:09.012 "is_configured": true, 00:12:09.012 "data_offset": 2048, 00:12:09.012 "data_size": 63488 00:12:09.012 } 00:12:09.012 ] 00:12:09.012 }' 00:12:09.012 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:09.012 21:11:20 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:09.269 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:09.269 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:09.527 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:12:09.527 21:11:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:09.787 [2024-07-14 21:11:21.221239] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:09.787 BaseBdev1 00:12:09.787 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:12:09.787 21:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:09.787 21:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:09.787 21:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:09.787 21:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:09.787 21:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:09.787 21:11:21 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:10.045 21:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:10.303 [ 00:12:10.303 { 00:12:10.303 "name": "BaseBdev1", 00:12:10.303 "aliases": [ 00:12:10.303 "9e805e49-4225-11ef-aa83-81fbc7dfef58" 00:12:10.303 ], 00:12:10.303 "product_name": "Malloc disk", 00:12:10.303 "block_size": 512, 00:12:10.303 "num_blocks": 65536, 00:12:10.303 "uuid": "9e805e49-4225-11ef-aa83-81fbc7dfef58", 00:12:10.303 "assigned_rate_limits": { 00:12:10.303 "rw_ios_per_sec": 0, 00:12:10.303 "rw_mbytes_per_sec": 0, 00:12:10.303 "r_mbytes_per_sec": 0, 00:12:10.303 "w_mbytes_per_sec": 0 00:12:10.303 }, 00:12:10.303 "claimed": true, 00:12:10.303 "claim_type": "exclusive_write", 00:12:10.303 "zoned": false, 00:12:10.303 "supported_io_types": { 00:12:10.303 "read": true, 00:12:10.303 "write": true, 00:12:10.303 "unmap": true, 00:12:10.303 "flush": true, 00:12:10.303 "reset": true, 00:12:10.303 "nvme_admin": false, 00:12:10.304 "nvme_io": false, 00:12:10.304 "nvme_io_md": false, 00:12:10.304 "write_zeroes": true, 00:12:10.304 "zcopy": true, 00:12:10.304 "get_zone_info": false, 00:12:10.304 "zone_management": false, 00:12:10.304 "zone_append": false, 00:12:10.304 "compare": false, 00:12:10.304 "compare_and_write": false, 00:12:10.304 "abort": true, 00:12:10.304 "seek_hole": false, 00:12:10.304 "seek_data": false, 00:12:10.304 "copy": true, 00:12:10.304 "nvme_iov_md": false 00:12:10.304 }, 00:12:10.304 "memory_domains": [ 00:12:10.304 { 00:12:10.304 "dma_device_id": "system", 00:12:10.304 "dma_device_type": 1 00:12:10.304 }, 00:12:10.304 { 00:12:10.304 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:10.304 "dma_device_type": 2 00:12:10.304 } 00:12:10.304 ], 00:12:10.304 "driver_specific": {} 00:12:10.304 } 00:12:10.304 ] 00:12:10.304 21:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:10.304 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:10.304 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:10.304 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:10.304 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:10.304 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:10.304 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:10.304 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:10.304 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:10.304 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:10.304 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:10.304 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.304 21:11:21 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:10.562 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:10.562 "name": "Existed_Raid", 00:12:10.562 "uuid": "9d5ac56a-4225-11ef-aa83-81fbc7dfef58", 00:12:10.562 "strip_size_kb": 0, 00:12:10.562 "state": "configuring", 00:12:10.562 "raid_level": "raid1", 00:12:10.562 "superblock": true, 00:12:10.562 "num_base_bdevs": 3, 00:12:10.562 "num_base_bdevs_discovered": 2, 00:12:10.562 "num_base_bdevs_operational": 3, 00:12:10.562 "base_bdevs_list": [ 00:12:10.562 { 00:12:10.562 "name": "BaseBdev1", 00:12:10.562 "uuid": "9e805e49-4225-11ef-aa83-81fbc7dfef58", 00:12:10.562 "is_configured": true, 00:12:10.562 "data_offset": 2048, 00:12:10.563 "data_size": 63488 00:12:10.563 }, 00:12:10.563 { 00:12:10.563 "name": null, 00:12:10.563 "uuid": "9c75e13e-4225-11ef-aa83-81fbc7dfef58", 00:12:10.563 "is_configured": false, 00:12:10.563 "data_offset": 2048, 00:12:10.563 "data_size": 63488 00:12:10.563 }, 00:12:10.563 { 00:12:10.563 "name": "BaseBdev3", 00:12:10.563 "uuid": "9ce80549-4225-11ef-aa83-81fbc7dfef58", 00:12:10.563 "is_configured": true, 00:12:10.563 "data_offset": 2048, 00:12:10.563 "data_size": 63488 00:12:10.563 } 00:12:10.563 ] 00:12:10.563 }' 00:12:10.563 21:11:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:10.563 21:11:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:10.821 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:10.821 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:11.080 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:12:11.080 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:12:11.338 [2024-07-14 21:11:22.793166] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:12:11.339 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:11.339 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:11.339 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:11.339 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:11.339 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:11.339 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:11.339 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:11.339 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:11.339 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:11.339 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:11.339 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.339 21:11:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:11.597 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:11.597 "name": "Existed_Raid", 00:12:11.597 "uuid": "9d5ac56a-4225-11ef-aa83-81fbc7dfef58", 00:12:11.597 "strip_size_kb": 0, 00:12:11.597 "state": "configuring", 00:12:11.597 "raid_level": "raid1", 00:12:11.597 "superblock": true, 00:12:11.597 "num_base_bdevs": 3, 00:12:11.597 "num_base_bdevs_discovered": 1, 00:12:11.597 "num_base_bdevs_operational": 3, 00:12:11.597 "base_bdevs_list": [ 00:12:11.597 { 00:12:11.597 "name": "BaseBdev1", 00:12:11.597 "uuid": "9e805e49-4225-11ef-aa83-81fbc7dfef58", 00:12:11.597 "is_configured": true, 00:12:11.597 "data_offset": 2048, 00:12:11.597 "data_size": 63488 00:12:11.597 }, 00:12:11.597 { 00:12:11.597 "name": null, 00:12:11.597 "uuid": "9c75e13e-4225-11ef-aa83-81fbc7dfef58", 00:12:11.597 "is_configured": false, 00:12:11.597 "data_offset": 2048, 00:12:11.597 "data_size": 63488 00:12:11.597 }, 00:12:11.597 { 00:12:11.597 "name": null, 00:12:11.597 "uuid": "9ce80549-4225-11ef-aa83-81fbc7dfef58", 00:12:11.597 "is_configured": false, 00:12:11.597 "data_offset": 2048, 00:12:11.597 "data_size": 63488 00:12:11.597 } 00:12:11.597 ] 00:12:11.597 }' 00:12:11.597 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:11.597 21:11:23 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:11.855 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:11.855 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:12.114 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:12:12.114 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:12:12.372 [2024-07-14 21:11:23.901222] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:12.372 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:12.372 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:12.372 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:12.372 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:12.372 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:12.372 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:12.372 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:12.372 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:12.372 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:12.372 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 
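The state checks above all go through the same pattern: verify_raid_bdev_state dumps every raid bdev over the test's RPC socket, picks out Existed_Raid with jq, and compares fields such as state and num_base_bdevs_discovered against the expected values. A minimal standalone sketch of that pattern, reusing the rpc.py path, socket, and jq filter exactly as they appear in the trace (the field comparison is an illustrative reduction, not the helper's full body):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock
# dump every raid bdev and keep only the one under test
info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all |
       jq -r '.[] | select(.name == "Existed_Raid")')
# compare a couple of the fields the helper verifies at this point:
# state "configuring" with 2 of 3 base bdevs discovered
state=$(jq -r '.state' <<< "$info")
discovered=$(jq -r '.num_base_bdevs_discovered' <<< "$info")
[[ $state == configuring && $discovered -eq 2 ]] || echo "state mismatch: $state/$discovered"
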
00:12:12.372 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:12.372 21:11:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:12.630 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:12.630 "name": "Existed_Raid", 00:12:12.630 "uuid": "9d5ac56a-4225-11ef-aa83-81fbc7dfef58", 00:12:12.630 "strip_size_kb": 0, 00:12:12.630 "state": "configuring", 00:12:12.630 "raid_level": "raid1", 00:12:12.630 "superblock": true, 00:12:12.630 "num_base_bdevs": 3, 00:12:12.630 "num_base_bdevs_discovered": 2, 00:12:12.630 "num_base_bdevs_operational": 3, 00:12:12.630 "base_bdevs_list": [ 00:12:12.630 { 00:12:12.630 "name": "BaseBdev1", 00:12:12.630 "uuid": "9e805e49-4225-11ef-aa83-81fbc7dfef58", 00:12:12.630 "is_configured": true, 00:12:12.630 "data_offset": 2048, 00:12:12.630 "data_size": 63488 00:12:12.630 }, 00:12:12.630 { 00:12:12.630 "name": null, 00:12:12.630 "uuid": "9c75e13e-4225-11ef-aa83-81fbc7dfef58", 00:12:12.630 "is_configured": false, 00:12:12.630 "data_offset": 2048, 00:12:12.630 "data_size": 63488 00:12:12.630 }, 00:12:12.630 { 00:12:12.630 "name": "BaseBdev3", 00:12:12.630 "uuid": "9ce80549-4225-11ef-aa83-81fbc7dfef58", 00:12:12.630 "is_configured": true, 00:12:12.630 "data_offset": 2048, 00:12:12.630 "data_size": 63488 00:12:12.630 } 00:12:12.630 ] 00:12:12.630 }' 00:12:12.630 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:12.630 21:11:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.198 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.198 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:12:13.198 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:12:13.198 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:13.457 [2024-07-14 21:11:24.965254] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:13.457 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:13.457 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:13.457 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:13.457 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:13.457 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:13.457 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:13.457 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:13.457 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:13.457 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:12:13.457 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:13.457 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.457 21:11:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:13.716 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:13.716 "name": "Existed_Raid", 00:12:13.716 "uuid": "9d5ac56a-4225-11ef-aa83-81fbc7dfef58", 00:12:13.716 "strip_size_kb": 0, 00:12:13.716 "state": "configuring", 00:12:13.716 "raid_level": "raid1", 00:12:13.716 "superblock": true, 00:12:13.716 "num_base_bdevs": 3, 00:12:13.716 "num_base_bdevs_discovered": 1, 00:12:13.716 "num_base_bdevs_operational": 3, 00:12:13.716 "base_bdevs_list": [ 00:12:13.716 { 00:12:13.716 "name": null, 00:12:13.716 "uuid": "9e805e49-4225-11ef-aa83-81fbc7dfef58", 00:12:13.716 "is_configured": false, 00:12:13.716 "data_offset": 2048, 00:12:13.716 "data_size": 63488 00:12:13.716 }, 00:12:13.716 { 00:12:13.716 "name": null, 00:12:13.716 "uuid": "9c75e13e-4225-11ef-aa83-81fbc7dfef58", 00:12:13.716 "is_configured": false, 00:12:13.716 "data_offset": 2048, 00:12:13.716 "data_size": 63488 00:12:13.716 }, 00:12:13.716 { 00:12:13.716 "name": "BaseBdev3", 00:12:13.716 "uuid": "9ce80549-4225-11ef-aa83-81fbc7dfef58", 00:12:13.716 "is_configured": true, 00:12:13.716 "data_offset": 2048, 00:12:13.716 "data_size": 63488 00:12:13.716 } 00:12:13.716 ] 00:12:13.716 }' 00:12:13.716 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:13.716 21:11:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:13.975 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:13.975 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:12:14.234 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:12:14.234 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:12:14.493 [2024-07-14 21:11:25.983282] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:14.494 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 3 00:12:14.494 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:14.494 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:14.494 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:14.494 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:14.494 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:14.494 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:14.494 21:11:25 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:14.494 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:14.494 21:11:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:14.494 21:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:14.494 21:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:14.753 21:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:14.753 "name": "Existed_Raid", 00:12:14.753 "uuid": "9d5ac56a-4225-11ef-aa83-81fbc7dfef58", 00:12:14.753 "strip_size_kb": 0, 00:12:14.753 "state": "configuring", 00:12:14.753 "raid_level": "raid1", 00:12:14.753 "superblock": true, 00:12:14.753 "num_base_bdevs": 3, 00:12:14.753 "num_base_bdevs_discovered": 2, 00:12:14.753 "num_base_bdevs_operational": 3, 00:12:14.753 "base_bdevs_list": [ 00:12:14.753 { 00:12:14.753 "name": null, 00:12:14.753 "uuid": "9e805e49-4225-11ef-aa83-81fbc7dfef58", 00:12:14.753 "is_configured": false, 00:12:14.753 "data_offset": 2048, 00:12:14.753 "data_size": 63488 00:12:14.753 }, 00:12:14.753 { 00:12:14.753 "name": "BaseBdev2", 00:12:14.753 "uuid": "9c75e13e-4225-11ef-aa83-81fbc7dfef58", 00:12:14.753 "is_configured": true, 00:12:14.753 "data_offset": 2048, 00:12:14.753 "data_size": 63488 00:12:14.753 }, 00:12:14.753 { 00:12:14.753 "name": "BaseBdev3", 00:12:14.753 "uuid": "9ce80549-4225-11ef-aa83-81fbc7dfef58", 00:12:14.753 "is_configured": true, 00:12:14.753 "data_offset": 2048, 00:12:14.753 "data_size": 63488 00:12:14.753 } 00:12:14.753 ] 00:12:14.753 }' 00:12:14.753 21:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:14.753 21:11:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:15.013 21:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.013 21:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:12:15.272 21:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:12:15.272 21:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:15.272 21:11:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:12:15.556 21:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 9e805e49-4225-11ef-aa83-81fbc7dfef58 00:12:15.814 [2024-07-14 21:11:27.223459] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:12:15.814 [2024-07-14 21:11:27.223517] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1a6568434f00 00:12:15.814 [2024-07-14 21:11:27.223522] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:15.814 [2024-07-14 21:11:27.223540] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1a6568497e20 00:12:15.814 [2024-07-14 21:11:27.223585] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1a6568434f00 00:12:15.814 [2024-07-14 21:11:27.223589] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x1a6568434f00 00:12:15.814 [2024-07-14 21:11:27.223608] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:15.814 NewBaseBdev 00:12:15.814 21:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:12:15.814 21:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:12:15.814 21:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:15.814 21:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:12:15.814 21:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:15.814 21:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:15.814 21:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:16.073 21:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:12:16.331 [ 00:12:16.331 { 00:12:16.331 "name": "NewBaseBdev", 00:12:16.331 "aliases": [ 00:12:16.331 "9e805e49-4225-11ef-aa83-81fbc7dfef58" 00:12:16.331 ], 00:12:16.331 "product_name": "Malloc disk", 00:12:16.331 "block_size": 512, 00:12:16.331 "num_blocks": 65536, 00:12:16.331 "uuid": "9e805e49-4225-11ef-aa83-81fbc7dfef58", 00:12:16.331 "assigned_rate_limits": { 00:12:16.331 "rw_ios_per_sec": 0, 00:12:16.331 "rw_mbytes_per_sec": 0, 00:12:16.331 "r_mbytes_per_sec": 0, 00:12:16.331 "w_mbytes_per_sec": 0 00:12:16.331 }, 00:12:16.331 "claimed": true, 00:12:16.331 "claim_type": "exclusive_write", 00:12:16.331 "zoned": false, 00:12:16.331 "supported_io_types": { 00:12:16.331 "read": true, 00:12:16.331 "write": true, 00:12:16.331 "unmap": true, 00:12:16.331 "flush": true, 00:12:16.331 "reset": true, 00:12:16.331 "nvme_admin": false, 00:12:16.331 "nvme_io": false, 00:12:16.331 "nvme_io_md": false, 00:12:16.331 "write_zeroes": true, 00:12:16.331 "zcopy": true, 00:12:16.331 "get_zone_info": false, 00:12:16.331 "zone_management": false, 00:12:16.331 "zone_append": false, 00:12:16.331 "compare": false, 00:12:16.331 "compare_and_write": false, 00:12:16.331 "abort": true, 00:12:16.332 "seek_hole": false, 00:12:16.332 "seek_data": false, 00:12:16.332 "copy": true, 00:12:16.332 "nvme_iov_md": false 00:12:16.332 }, 00:12:16.332 "memory_domains": [ 00:12:16.332 { 00:12:16.332 "dma_device_id": "system", 00:12:16.332 "dma_device_type": 1 00:12:16.332 }, 00:12:16.332 { 00:12:16.332 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:16.332 "dma_device_type": 2 00:12:16.332 } 00:12:16.332 ], 00:12:16.332 "driver_specific": {} 00:12:16.332 } 00:12:16.332 ] 00:12:16.332 21:11:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:12:16.332 21:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:12:16.332 21:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:16.332 21:11:27 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:16.332 21:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:16.332 21:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:16.332 21:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:16.332 21:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:16.332 21:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:16.332 21:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:16.332 21:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:16.332 21:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:16.332 21:11:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:16.591 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:16.591 "name": "Existed_Raid", 00:12:16.591 "uuid": "9d5ac56a-4225-11ef-aa83-81fbc7dfef58", 00:12:16.591 "strip_size_kb": 0, 00:12:16.591 "state": "online", 00:12:16.591 "raid_level": "raid1", 00:12:16.591 "superblock": true, 00:12:16.591 "num_base_bdevs": 3, 00:12:16.591 "num_base_bdevs_discovered": 3, 00:12:16.591 "num_base_bdevs_operational": 3, 00:12:16.591 "base_bdevs_list": [ 00:12:16.591 { 00:12:16.591 "name": "NewBaseBdev", 00:12:16.591 "uuid": "9e805e49-4225-11ef-aa83-81fbc7dfef58", 00:12:16.591 "is_configured": true, 00:12:16.591 "data_offset": 2048, 00:12:16.591 "data_size": 63488 00:12:16.591 }, 00:12:16.591 { 00:12:16.591 "name": "BaseBdev2", 00:12:16.591 "uuid": "9c75e13e-4225-11ef-aa83-81fbc7dfef58", 00:12:16.591 "is_configured": true, 00:12:16.591 "data_offset": 2048, 00:12:16.591 "data_size": 63488 00:12:16.591 }, 00:12:16.591 { 00:12:16.591 "name": "BaseBdev3", 00:12:16.591 "uuid": "9ce80549-4225-11ef-aa83-81fbc7dfef58", 00:12:16.591 "is_configured": true, 00:12:16.591 "data_offset": 2048, 00:12:16.591 "data_size": 63488 00:12:16.591 } 00:12:16.591 ] 00:12:16.591 }' 00:12:16.591 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:16.591 21:11:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:16.849 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:12:16.849 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:16.849 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:16.849 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:16.849 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:16.849 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:12:16.849 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:16.849 21:11:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:17.108 [2024-07-14 21:11:28.531386] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:17.108 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:17.108 "name": "Existed_Raid", 00:12:17.108 "aliases": [ 00:12:17.108 "9d5ac56a-4225-11ef-aa83-81fbc7dfef58" 00:12:17.108 ], 00:12:17.108 "product_name": "Raid Volume", 00:12:17.108 "block_size": 512, 00:12:17.108 "num_blocks": 63488, 00:12:17.108 "uuid": "9d5ac56a-4225-11ef-aa83-81fbc7dfef58", 00:12:17.108 "assigned_rate_limits": { 00:12:17.108 "rw_ios_per_sec": 0, 00:12:17.108 "rw_mbytes_per_sec": 0, 00:12:17.108 "r_mbytes_per_sec": 0, 00:12:17.108 "w_mbytes_per_sec": 0 00:12:17.108 }, 00:12:17.108 "claimed": false, 00:12:17.108 "zoned": false, 00:12:17.108 "supported_io_types": { 00:12:17.108 "read": true, 00:12:17.108 "write": true, 00:12:17.108 "unmap": false, 00:12:17.108 "flush": false, 00:12:17.108 "reset": true, 00:12:17.108 "nvme_admin": false, 00:12:17.108 "nvme_io": false, 00:12:17.108 "nvme_io_md": false, 00:12:17.108 "write_zeroes": true, 00:12:17.108 "zcopy": false, 00:12:17.108 "get_zone_info": false, 00:12:17.108 "zone_management": false, 00:12:17.108 "zone_append": false, 00:12:17.108 "compare": false, 00:12:17.108 "compare_and_write": false, 00:12:17.108 "abort": false, 00:12:17.108 "seek_hole": false, 00:12:17.108 "seek_data": false, 00:12:17.108 "copy": false, 00:12:17.108 "nvme_iov_md": false 00:12:17.108 }, 00:12:17.108 "memory_domains": [ 00:12:17.108 { 00:12:17.108 "dma_device_id": "system", 00:12:17.108 "dma_device_type": 1 00:12:17.108 }, 00:12:17.108 { 00:12:17.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.108 "dma_device_type": 2 00:12:17.108 }, 00:12:17.108 { 00:12:17.108 "dma_device_id": "system", 00:12:17.108 "dma_device_type": 1 00:12:17.108 }, 00:12:17.108 { 00:12:17.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.108 "dma_device_type": 2 00:12:17.108 }, 00:12:17.108 { 00:12:17.108 "dma_device_id": "system", 00:12:17.108 "dma_device_type": 1 00:12:17.108 }, 00:12:17.108 { 00:12:17.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.108 "dma_device_type": 2 00:12:17.108 } 00:12:17.108 ], 00:12:17.108 "driver_specific": { 00:12:17.108 "raid": { 00:12:17.108 "uuid": "9d5ac56a-4225-11ef-aa83-81fbc7dfef58", 00:12:17.108 "strip_size_kb": 0, 00:12:17.108 "state": "online", 00:12:17.108 "raid_level": "raid1", 00:12:17.108 "superblock": true, 00:12:17.108 "num_base_bdevs": 3, 00:12:17.108 "num_base_bdevs_discovered": 3, 00:12:17.108 "num_base_bdevs_operational": 3, 00:12:17.108 "base_bdevs_list": [ 00:12:17.108 { 00:12:17.108 "name": "NewBaseBdev", 00:12:17.108 "uuid": "9e805e49-4225-11ef-aa83-81fbc7dfef58", 00:12:17.108 "is_configured": true, 00:12:17.108 "data_offset": 2048, 00:12:17.108 "data_size": 63488 00:12:17.108 }, 00:12:17.108 { 00:12:17.108 "name": "BaseBdev2", 00:12:17.108 "uuid": "9c75e13e-4225-11ef-aa83-81fbc7dfef58", 00:12:17.108 "is_configured": true, 00:12:17.108 "data_offset": 2048, 00:12:17.108 "data_size": 63488 00:12:17.108 }, 00:12:17.108 { 00:12:17.108 "name": "BaseBdev3", 00:12:17.108 "uuid": "9ce80549-4225-11ef-aa83-81fbc7dfef58", 00:12:17.108 "is_configured": true, 00:12:17.108 "data_offset": 2048, 00:12:17.108 "data_size": 63488 00:12:17.108 } 00:12:17.108 ] 00:12:17.108 } 00:12:17.108 } 00:12:17.108 }' 00:12:17.108 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r 
'.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:17.108 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:12:17.108 BaseBdev2 00:12:17.108 BaseBdev3' 00:12:17.108 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:17.108 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:12:17.108 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:17.367 "name": "NewBaseBdev", 00:12:17.367 "aliases": [ 00:12:17.367 "9e805e49-4225-11ef-aa83-81fbc7dfef58" 00:12:17.367 ], 00:12:17.367 "product_name": "Malloc disk", 00:12:17.367 "block_size": 512, 00:12:17.367 "num_blocks": 65536, 00:12:17.367 "uuid": "9e805e49-4225-11ef-aa83-81fbc7dfef58", 00:12:17.367 "assigned_rate_limits": { 00:12:17.367 "rw_ios_per_sec": 0, 00:12:17.367 "rw_mbytes_per_sec": 0, 00:12:17.367 "r_mbytes_per_sec": 0, 00:12:17.367 "w_mbytes_per_sec": 0 00:12:17.367 }, 00:12:17.367 "claimed": true, 00:12:17.367 "claim_type": "exclusive_write", 00:12:17.367 "zoned": false, 00:12:17.367 "supported_io_types": { 00:12:17.367 "read": true, 00:12:17.367 "write": true, 00:12:17.367 "unmap": true, 00:12:17.367 "flush": true, 00:12:17.367 "reset": true, 00:12:17.367 "nvme_admin": false, 00:12:17.367 "nvme_io": false, 00:12:17.367 "nvme_io_md": false, 00:12:17.367 "write_zeroes": true, 00:12:17.367 "zcopy": true, 00:12:17.367 "get_zone_info": false, 00:12:17.367 "zone_management": false, 00:12:17.367 "zone_append": false, 00:12:17.367 "compare": false, 00:12:17.367 "compare_and_write": false, 00:12:17.367 "abort": true, 00:12:17.367 "seek_hole": false, 00:12:17.367 "seek_data": false, 00:12:17.367 "copy": true, 00:12:17.367 "nvme_iov_md": false 00:12:17.367 }, 00:12:17.367 "memory_domains": [ 00:12:17.367 { 00:12:17.367 "dma_device_id": "system", 00:12:17.367 "dma_device_type": 1 00:12:17.367 }, 00:12:17.367 { 00:12:17.367 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.367 "dma_device_type": 2 00:12:17.367 } 00:12:17.367 ], 00:12:17.367 "driver_specific": {} 00:12:17.367 }' 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
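The block of doubled jq probes above is the verify_raid_bdev_properties loop: for every configured base bdev it pulls the full bdev dump and compares block_size, md_size, md_interleave and dif_type against the raid volume's own values (each jq runs twice because both sides of the [[ ... == ... ]] are command substitutions). A minimal sketch of that pattern, assuming rpc.py can reach the target on /var/tmp/spdk-raid.sock; variable names here are illustrative, not the exact bdev_raid.sh code:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  raid_info=$($rpc bdev_get_bdevs -b Existed_Raid | jq '.[]')
  names=$(jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' <<< "$raid_info")
  for name in $names; do
    base_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    # each base bdev must match the raid volume's block size and metadata layout
    [[ $(jq .block_size <<< "$base_info") == $(jq .block_size <<< "$raid_info") ]]
    [[ $(jq .md_size <<< "$base_info") == $(jq .md_size <<< "$raid_info") ]]
    [[ $(jq .md_interleave <<< "$base_info") == $(jq .md_interleave <<< "$raid_info") ]]
    [[ $(jq .dif_type <<< "$base_info") == $(jq .dif_type <<< "$raid_info") ]]
  done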
00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:17.367 21:11:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:17.935 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:17.935 "name": "BaseBdev2", 00:12:17.935 "aliases": [ 00:12:17.935 "9c75e13e-4225-11ef-aa83-81fbc7dfef58" 00:12:17.935 ], 00:12:17.935 "product_name": "Malloc disk", 00:12:17.935 "block_size": 512, 00:12:17.935 "num_blocks": 65536, 00:12:17.935 "uuid": "9c75e13e-4225-11ef-aa83-81fbc7dfef58", 00:12:17.935 "assigned_rate_limits": { 00:12:17.935 "rw_ios_per_sec": 0, 00:12:17.935 "rw_mbytes_per_sec": 0, 00:12:17.935 "r_mbytes_per_sec": 0, 00:12:17.935 "w_mbytes_per_sec": 0 00:12:17.935 }, 00:12:17.935 "claimed": true, 00:12:17.935 "claim_type": "exclusive_write", 00:12:17.935 "zoned": false, 00:12:17.935 "supported_io_types": { 00:12:17.935 "read": true, 00:12:17.935 "write": true, 00:12:17.935 "unmap": true, 00:12:17.935 "flush": true, 00:12:17.935 "reset": true, 00:12:17.935 "nvme_admin": false, 00:12:17.935 "nvme_io": false, 00:12:17.935 "nvme_io_md": false, 00:12:17.935 "write_zeroes": true, 00:12:17.935 "zcopy": true, 00:12:17.935 "get_zone_info": false, 00:12:17.935 "zone_management": false, 00:12:17.935 "zone_append": false, 00:12:17.935 "compare": false, 00:12:17.935 "compare_and_write": false, 00:12:17.935 "abort": true, 00:12:17.935 "seek_hole": false, 00:12:17.935 "seek_data": false, 00:12:17.935 "copy": true, 00:12:17.936 "nvme_iov_md": false 00:12:17.936 }, 00:12:17.936 "memory_domains": [ 00:12:17.936 { 00:12:17.936 "dma_device_id": "system", 00:12:17.936 "dma_device_type": 1 00:12:17.936 }, 00:12:17.936 { 00:12:17.936 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:17.936 "dma_device_type": 2 00:12:17.936 } 00:12:17.936 ], 00:12:17.936 "driver_specific": {} 00:12:17.936 }' 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:17.936 21:11:29 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:17.936 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:18.194 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:18.194 "name": "BaseBdev3", 00:12:18.194 "aliases": [ 00:12:18.194 "9ce80549-4225-11ef-aa83-81fbc7dfef58" 00:12:18.194 ], 00:12:18.194 "product_name": "Malloc disk", 00:12:18.194 "block_size": 512, 00:12:18.194 "num_blocks": 65536, 00:12:18.194 "uuid": "9ce80549-4225-11ef-aa83-81fbc7dfef58", 00:12:18.194 "assigned_rate_limits": { 00:12:18.195 "rw_ios_per_sec": 0, 00:12:18.195 "rw_mbytes_per_sec": 0, 00:12:18.195 "r_mbytes_per_sec": 0, 00:12:18.195 "w_mbytes_per_sec": 0 00:12:18.195 }, 00:12:18.195 "claimed": true, 00:12:18.195 "claim_type": "exclusive_write", 00:12:18.195 "zoned": false, 00:12:18.195 "supported_io_types": { 00:12:18.195 "read": true, 00:12:18.195 "write": true, 00:12:18.195 "unmap": true, 00:12:18.195 "flush": true, 00:12:18.195 "reset": true, 00:12:18.195 "nvme_admin": false, 00:12:18.195 "nvme_io": false, 00:12:18.195 "nvme_io_md": false, 00:12:18.195 "write_zeroes": true, 00:12:18.195 "zcopy": true, 00:12:18.195 "get_zone_info": false, 00:12:18.195 "zone_management": false, 00:12:18.195 "zone_append": false, 00:12:18.195 "compare": false, 00:12:18.195 "compare_and_write": false, 00:12:18.195 "abort": true, 00:12:18.195 "seek_hole": false, 00:12:18.195 "seek_data": false, 00:12:18.195 "copy": true, 00:12:18.195 "nvme_iov_md": false 00:12:18.195 }, 00:12:18.195 "memory_domains": [ 00:12:18.195 { 00:12:18.195 "dma_device_id": "system", 00:12:18.195 "dma_device_type": 1 00:12:18.195 }, 00:12:18.195 { 00:12:18.195 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:18.195 "dma_device_type": 2 00:12:18.195 } 00:12:18.195 ], 00:12:18.195 "driver_specific": {} 00:12:18.195 }' 00:12:18.195 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:18.195 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:18.195 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:18.195 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:18.195 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:18.195 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:18.195 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:18.195 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:18.195 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:18.195 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:18.195 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:18.195 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:18.195 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_delete Existed_Raid 00:12:18.454 [2024-07-14 21:11:29.911365] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:18.454 [2024-07-14 21:11:29.911379] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:18.454 [2024-07-14 21:11:29.911408] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:18.454 [2024-07-14 21:11:29.911499] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:18.454 [2024-07-14 21:11:29.911503] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1a6568434f00 name Existed_Raid, state offline 00:12:18.454 21:11:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 56764 00:12:18.454 21:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 56764 ']' 00:12:18.454 21:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 56764 00:12:18.454 21:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:12:18.454 21:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:18.454 21:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 56764 00:12:18.454 21:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:12:18.454 21:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:18.454 killing process with pid 56764 00:12:18.454 21:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:18.454 21:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 56764' 00:12:18.454 21:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 56764 00:12:18.454 [2024-07-14 21:11:29.938674] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:18.454 21:11:29 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 56764 00:12:18.454 [2024-07-14 21:11:29.963904] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:18.713 21:11:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:12:18.713 00:12:18.713 real 0m23.258s 00:12:18.713 user 0m42.195s 00:12:18.713 sys 0m3.441s 00:12:18.713 21:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:18.713 21:11:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:12:18.713 ************************************ 00:12:18.713 END TEST raid_state_function_test_sb 00:12:18.713 ************************************ 00:12:18.971 21:11:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:18.971 21:11:30 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 3 00:12:18.971 21:11:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:12:18.971 21:11:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:18.971 21:11:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:18.971 ************************************ 00:12:18.971 START TEST raid_superblock_test 00:12:18.971 ************************************ 00:12:18.971 21:11:30 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 3 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=3 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=57488 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 57488 /var/tmp/spdk-raid.sock 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 57488 ']' 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:18.971 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:18.971 21:11:30 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:18.971 [2024-07-14 21:11:30.284365] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
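raid_superblock_test gets its own bdev_svc app on a private RPC socket: the traces above launch it with -L bdev_raid, record raid_pid=57488, and waitforlisten blocks until the process answers on /var/tmp/spdk-raid.sock. A rough sketch of that lifecycle, with the waitforlisten polling collapsed to a simple loop (the real helper in autotest_common.sh does considerably more bookkeeping):

  sock=/var/tmp/spdk-raid.sock
  /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$sock" -L bdev_raid &
  raid_pid=$!
  # poll until the app starts answering RPCs on its UNIX-domain socket
  until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; do
    kill -0 "$raid_pid" || exit 1   # the app died before it started listening
    sleep 0.1
  done
  # ... raid1 superblock scenario runs here ...
  kill "$raid_pid"
  wait "$raid_pid" || true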
00:12:18.971 [2024-07-14 21:11:30.284646] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:19.539 EAL: TSC is not safe to use in SMP mode 00:12:19.539 EAL: TSC is not invariant 00:12:19.539 [2024-07-14 21:11:30.829898] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:19.539 [2024-07-14 21:11:30.933086] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:19.539 [2024-07-14 21:11:30.935739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:19.539 [2024-07-14 21:11:30.936730] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.539 [2024-07-14 21:11:30.936748] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:19.798 21:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:19.798 21:11:31 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:12:19.798 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:12:19.798 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:19.798 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:12:19.798 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:12:19.798 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:12:19.798 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:19.799 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:19.799 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:19.799 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:12:20.057 malloc1 00:12:20.057 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:20.316 [2024-07-14 21:11:31.798674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:20.316 [2024-07-14 21:11:31.798734] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.316 [2024-07-14 21:11:31.798772] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105a8434780 00:12:20.316 [2024-07-14 21:11:31.798790] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.316 [2024-07-14 21:11:31.799869] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.316 [2024-07-14 21:11:31.799894] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:20.316 pt1 00:12:20.316 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:20.316 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:20.316 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:12:20.316 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local 
bdev_pt=pt2 00:12:20.316 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:12:20.316 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:20.316 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:20.316 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:20.316 21:11:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:12:20.576 malloc2 00:12:20.576 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:20.835 [2024-07-14 21:11:32.278716] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:20.835 [2024-07-14 21:11:32.278802] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:20.835 [2024-07-14 21:11:32.278830] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105a8434c80 00:12:20.835 [2024-07-14 21:11:32.278837] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:20.835 [2024-07-14 21:11:32.279511] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:20.835 [2024-07-14 21:11:32.279537] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:20.835 pt2 00:12:20.835 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:20.835 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:20.835 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:12:20.835 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:12:20.835 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:12:20.835 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:12:20.835 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:12:20.835 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:12:20.835 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:12:21.094 malloc3 00:12:21.094 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:21.353 [2024-07-14 21:11:32.746731] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:21.353 [2024-07-14 21:11:32.746816] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:21.353 [2024-07-14 21:11:32.746842] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105a8435180 00:12:21.353 [2024-07-14 21:11:32.746849] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:21.353 [2024-07-14 21:11:32.747544] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:21.353 [2024-07-14 21:11:32.747569] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:21.353 pt3 00:12:21.353 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:12:21.353 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:12:21.354 21:11:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3' -n raid_bdev1 -s 00:12:21.613 [2024-07-14 21:11:33.006756] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:21.613 [2024-07-14 21:11:33.007325] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:21.613 [2024-07-14 21:11:33.007347] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:21.613 [2024-07-14 21:11:33.007395] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x105a8435400 00:12:21.613 [2024-07-14 21:11:33.007400] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:21.613 [2024-07-14 21:11:33.007443] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x105a8497e20 00:12:21.613 [2024-07-14 21:11:33.007548] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x105a8435400 00:12:21.613 [2024-07-14 21:11:33.007552] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x105a8435400 00:12:21.613 [2024-07-14 21:11:33.007577] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:21.613 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:21.613 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:21.613 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:21.613 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:21.613 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:21.613 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:21.613 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:21.613 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:21.613 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:21.613 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:21.613 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:21.613 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:21.871 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:21.871 "name": "raid_bdev1", 00:12:21.871 "uuid": "a586b7bd-4225-11ef-aa83-81fbc7dfef58", 00:12:21.871 "strip_size_kb": 0, 00:12:21.871 "state": "online", 00:12:21.871 "raid_level": "raid1", 00:12:21.871 "superblock": true, 00:12:21.871 "num_base_bdevs": 3, 00:12:21.871 
"num_base_bdevs_discovered": 3, 00:12:21.871 "num_base_bdevs_operational": 3, 00:12:21.871 "base_bdevs_list": [ 00:12:21.871 { 00:12:21.871 "name": "pt1", 00:12:21.871 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:21.871 "is_configured": true, 00:12:21.871 "data_offset": 2048, 00:12:21.871 "data_size": 63488 00:12:21.871 }, 00:12:21.871 { 00:12:21.871 "name": "pt2", 00:12:21.871 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:21.871 "is_configured": true, 00:12:21.871 "data_offset": 2048, 00:12:21.871 "data_size": 63488 00:12:21.871 }, 00:12:21.871 { 00:12:21.871 "name": "pt3", 00:12:21.871 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:21.871 "is_configured": true, 00:12:21.871 "data_offset": 2048, 00:12:21.871 "data_size": 63488 00:12:21.871 } 00:12:21.871 ] 00:12:21.871 }' 00:12:21.871 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:21.871 21:11:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:22.129 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:12:22.129 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:22.129 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:22.129 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:22.129 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:22.129 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:22.129 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:22.129 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:22.388 [2024-07-14 21:11:33.794839] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:22.388 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:22.388 "name": "raid_bdev1", 00:12:22.388 "aliases": [ 00:12:22.388 "a586b7bd-4225-11ef-aa83-81fbc7dfef58" 00:12:22.388 ], 00:12:22.388 "product_name": "Raid Volume", 00:12:22.388 "block_size": 512, 00:12:22.388 "num_blocks": 63488, 00:12:22.388 "uuid": "a586b7bd-4225-11ef-aa83-81fbc7dfef58", 00:12:22.388 "assigned_rate_limits": { 00:12:22.388 "rw_ios_per_sec": 0, 00:12:22.388 "rw_mbytes_per_sec": 0, 00:12:22.388 "r_mbytes_per_sec": 0, 00:12:22.388 "w_mbytes_per_sec": 0 00:12:22.388 }, 00:12:22.388 "claimed": false, 00:12:22.388 "zoned": false, 00:12:22.388 "supported_io_types": { 00:12:22.388 "read": true, 00:12:22.388 "write": true, 00:12:22.388 "unmap": false, 00:12:22.388 "flush": false, 00:12:22.388 "reset": true, 00:12:22.388 "nvme_admin": false, 00:12:22.388 "nvme_io": false, 00:12:22.388 "nvme_io_md": false, 00:12:22.388 "write_zeroes": true, 00:12:22.388 "zcopy": false, 00:12:22.388 "get_zone_info": false, 00:12:22.388 "zone_management": false, 00:12:22.388 "zone_append": false, 00:12:22.388 "compare": false, 00:12:22.388 "compare_and_write": false, 00:12:22.388 "abort": false, 00:12:22.388 "seek_hole": false, 00:12:22.388 "seek_data": false, 00:12:22.388 "copy": false, 00:12:22.388 "nvme_iov_md": false 00:12:22.388 }, 00:12:22.388 "memory_domains": [ 00:12:22.388 { 00:12:22.388 "dma_device_id": "system", 00:12:22.388 "dma_device_type": 1 00:12:22.388 }, 00:12:22.388 { 
00:12:22.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.388 "dma_device_type": 2 00:12:22.388 }, 00:12:22.388 { 00:12:22.388 "dma_device_id": "system", 00:12:22.388 "dma_device_type": 1 00:12:22.388 }, 00:12:22.388 { 00:12:22.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.388 "dma_device_type": 2 00:12:22.388 }, 00:12:22.388 { 00:12:22.388 "dma_device_id": "system", 00:12:22.388 "dma_device_type": 1 00:12:22.388 }, 00:12:22.388 { 00:12:22.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.388 "dma_device_type": 2 00:12:22.388 } 00:12:22.388 ], 00:12:22.388 "driver_specific": { 00:12:22.388 "raid": { 00:12:22.388 "uuid": "a586b7bd-4225-11ef-aa83-81fbc7dfef58", 00:12:22.388 "strip_size_kb": 0, 00:12:22.388 "state": "online", 00:12:22.388 "raid_level": "raid1", 00:12:22.388 "superblock": true, 00:12:22.388 "num_base_bdevs": 3, 00:12:22.388 "num_base_bdevs_discovered": 3, 00:12:22.388 "num_base_bdevs_operational": 3, 00:12:22.388 "base_bdevs_list": [ 00:12:22.388 { 00:12:22.388 "name": "pt1", 00:12:22.388 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.388 "is_configured": true, 00:12:22.388 "data_offset": 2048, 00:12:22.388 "data_size": 63488 00:12:22.388 }, 00:12:22.388 { 00:12:22.388 "name": "pt2", 00:12:22.388 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.388 "is_configured": true, 00:12:22.388 "data_offset": 2048, 00:12:22.388 "data_size": 63488 00:12:22.388 }, 00:12:22.388 { 00:12:22.388 "name": "pt3", 00:12:22.388 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:22.388 "is_configured": true, 00:12:22.388 "data_offset": 2048, 00:12:22.389 "data_size": 63488 00:12:22.389 } 00:12:22.389 ] 00:12:22.389 } 00:12:22.389 } 00:12:22.389 }' 00:12:22.389 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:22.389 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:22.389 pt2 00:12:22.389 pt3' 00:12:22.389 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:22.389 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:22.389 21:11:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:22.648 "name": "pt1", 00:12:22.648 "aliases": [ 00:12:22.648 "00000000-0000-0000-0000-000000000001" 00:12:22.648 ], 00:12:22.648 "product_name": "passthru", 00:12:22.648 "block_size": 512, 00:12:22.648 "num_blocks": 65536, 00:12:22.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:22.648 "assigned_rate_limits": { 00:12:22.648 "rw_ios_per_sec": 0, 00:12:22.648 "rw_mbytes_per_sec": 0, 00:12:22.648 "r_mbytes_per_sec": 0, 00:12:22.648 "w_mbytes_per_sec": 0 00:12:22.648 }, 00:12:22.648 "claimed": true, 00:12:22.648 "claim_type": "exclusive_write", 00:12:22.648 "zoned": false, 00:12:22.648 "supported_io_types": { 00:12:22.648 "read": true, 00:12:22.648 "write": true, 00:12:22.648 "unmap": true, 00:12:22.648 "flush": true, 00:12:22.648 "reset": true, 00:12:22.648 "nvme_admin": false, 00:12:22.648 "nvme_io": false, 00:12:22.648 "nvme_io_md": false, 00:12:22.648 "write_zeroes": true, 00:12:22.648 "zcopy": true, 00:12:22.648 "get_zone_info": false, 00:12:22.648 "zone_management": false, 00:12:22.648 "zone_append": false, 00:12:22.648 
"compare": false, 00:12:22.648 "compare_and_write": false, 00:12:22.648 "abort": true, 00:12:22.648 "seek_hole": false, 00:12:22.648 "seek_data": false, 00:12:22.648 "copy": true, 00:12:22.648 "nvme_iov_md": false 00:12:22.648 }, 00:12:22.648 "memory_domains": [ 00:12:22.648 { 00:12:22.648 "dma_device_id": "system", 00:12:22.648 "dma_device_type": 1 00:12:22.648 }, 00:12:22.648 { 00:12:22.648 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.648 "dma_device_type": 2 00:12:22.648 } 00:12:22.648 ], 00:12:22.648 "driver_specific": { 00:12:22.648 "passthru": { 00:12:22.648 "name": "pt1", 00:12:22.648 "base_bdev_name": "malloc1" 00:12:22.648 } 00:12:22.648 } 00:12:22.648 }' 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:22.648 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:22.907 "name": "pt2", 00:12:22.907 "aliases": [ 00:12:22.907 "00000000-0000-0000-0000-000000000002" 00:12:22.907 ], 00:12:22.907 "product_name": "passthru", 00:12:22.907 "block_size": 512, 00:12:22.907 "num_blocks": 65536, 00:12:22.907 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:22.907 "assigned_rate_limits": { 00:12:22.907 "rw_ios_per_sec": 0, 00:12:22.907 "rw_mbytes_per_sec": 0, 00:12:22.907 "r_mbytes_per_sec": 0, 00:12:22.907 "w_mbytes_per_sec": 0 00:12:22.907 }, 00:12:22.907 "claimed": true, 00:12:22.907 "claim_type": "exclusive_write", 00:12:22.907 "zoned": false, 00:12:22.907 "supported_io_types": { 00:12:22.907 "read": true, 00:12:22.907 "write": true, 00:12:22.907 "unmap": true, 00:12:22.907 "flush": true, 00:12:22.907 "reset": true, 00:12:22.907 "nvme_admin": false, 00:12:22.907 "nvme_io": false, 00:12:22.907 "nvme_io_md": false, 00:12:22.907 "write_zeroes": true, 00:12:22.907 "zcopy": true, 00:12:22.907 "get_zone_info": false, 00:12:22.907 "zone_management": false, 00:12:22.907 "zone_append": false, 00:12:22.907 "compare": false, 00:12:22.907 "compare_and_write": false, 00:12:22.907 "abort": true, 00:12:22.907 "seek_hole": false, 00:12:22.907 "seek_data": false, 
00:12:22.907 "copy": true, 00:12:22.907 "nvme_iov_md": false 00:12:22.907 }, 00:12:22.907 "memory_domains": [ 00:12:22.907 { 00:12:22.907 "dma_device_id": "system", 00:12:22.907 "dma_device_type": 1 00:12:22.907 }, 00:12:22.907 { 00:12:22.907 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:22.907 "dma_device_type": 2 00:12:22.907 } 00:12:22.907 ], 00:12:22.907 "driver_specific": { 00:12:22.907 "passthru": { 00:12:22.907 "name": "pt2", 00:12:22.907 "base_bdev_name": "malloc2" 00:12:22.907 } 00:12:22.907 } 00:12:22.907 }' 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:22.907 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:23.474 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:23.474 "name": "pt3", 00:12:23.474 "aliases": [ 00:12:23.474 "00000000-0000-0000-0000-000000000003" 00:12:23.474 ], 00:12:23.475 "product_name": "passthru", 00:12:23.475 "block_size": 512, 00:12:23.475 "num_blocks": 65536, 00:12:23.475 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:23.475 "assigned_rate_limits": { 00:12:23.475 "rw_ios_per_sec": 0, 00:12:23.475 "rw_mbytes_per_sec": 0, 00:12:23.475 "r_mbytes_per_sec": 0, 00:12:23.475 "w_mbytes_per_sec": 0 00:12:23.475 }, 00:12:23.475 "claimed": true, 00:12:23.475 "claim_type": "exclusive_write", 00:12:23.475 "zoned": false, 00:12:23.475 "supported_io_types": { 00:12:23.475 "read": true, 00:12:23.475 "write": true, 00:12:23.475 "unmap": true, 00:12:23.475 "flush": true, 00:12:23.475 "reset": true, 00:12:23.475 "nvme_admin": false, 00:12:23.475 "nvme_io": false, 00:12:23.475 "nvme_io_md": false, 00:12:23.475 "write_zeroes": true, 00:12:23.475 "zcopy": true, 00:12:23.475 "get_zone_info": false, 00:12:23.475 "zone_management": false, 00:12:23.475 "zone_append": false, 00:12:23.475 "compare": false, 00:12:23.475 "compare_and_write": false, 00:12:23.475 "abort": true, 00:12:23.475 "seek_hole": false, 00:12:23.475 "seek_data": false, 00:12:23.475 "copy": true, 00:12:23.475 "nvme_iov_md": false 00:12:23.475 }, 00:12:23.475 "memory_domains": [ 00:12:23.475 { 00:12:23.475 "dma_device_id": 
"system", 00:12:23.475 "dma_device_type": 1 00:12:23.475 }, 00:12:23.475 { 00:12:23.475 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:23.475 "dma_device_type": 2 00:12:23.475 } 00:12:23.475 ], 00:12:23.475 "driver_specific": { 00:12:23.475 "passthru": { 00:12:23.475 "name": "pt3", 00:12:23.475 "base_bdev_name": "malloc3" 00:12:23.475 } 00:12:23.475 } 00:12:23.475 }' 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:23.475 21:11:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:12:23.733 [2024-07-14 21:11:35.034877] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:23.733 21:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=a586b7bd-4225-11ef-aa83-81fbc7dfef58 00:12:23.733 21:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z a586b7bd-4225-11ef-aa83-81fbc7dfef58 ']' 00:12:23.733 21:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:23.991 [2024-07-14 21:11:35.314828] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:23.991 [2024-07-14 21:11:35.314842] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:23.991 [2024-07-14 21:11:35.314878] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:23.991 [2024-07-14 21:11:35.314894] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:23.991 [2024-07-14 21:11:35.314898] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x105a8435400 name raid_bdev1, state offline 00:12:23.991 21:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:23.991 21:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:12:24.249 21:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:12:24.249 21:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 
00:12:24.249 21:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:24.249 21:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:24.543 21:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:24.544 21:11:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:24.544 21:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:12:24.544 21:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:24.817 21:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:12:24.817 21:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:12:25.076 21:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:12:25.076 21:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:25.076 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:12:25.076 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:25.076 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:25.076 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.076 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:25.076 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.076 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:25.076 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:12:25.076 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:12:25.076 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:12:25.076 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1 00:12:25.334 [2024-07-14 21:11:36.706908] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:12:25.334 [2024-07-14 21:11:36.707536] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:12:25.334 [2024-07-14 21:11:36.707555] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:12:25.334 
[2024-07-14 21:11:36.707569] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:12:25.334 [2024-07-14 21:11:36.707615] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:12:25.335 [2024-07-14 21:11:36.707627] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:12:25.335 [2024-07-14 21:11:36.707635] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:25.335 [2024-07-14 21:11:36.707639] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x105a8435180 name raid_bdev1, state configuring 00:12:25.335 request: 00:12:25.335 { 00:12:25.335 "name": "raid_bdev1", 00:12:25.335 "raid_level": "raid1", 00:12:25.335 "base_bdevs": [ 00:12:25.335 "malloc1", 00:12:25.335 "malloc2", 00:12:25.335 "malloc3" 00:12:25.335 ], 00:12:25.335 "superblock": false, 00:12:25.335 "method": "bdev_raid_create", 00:12:25.335 "req_id": 1 00:12:25.335 } 00:12:25.335 Got JSON-RPC error response 00:12:25.335 response: 00:12:25.335 { 00:12:25.335 "code": -17, 00:12:25.335 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:12:25.335 } 00:12:25.335 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:12:25.335 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:12:25.335 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:12:25.335 21:11:36 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:12:25.335 21:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:25.335 21:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:12:25.593 21:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:12:25.593 21:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:12:25.593 21:11:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:25.852 [2024-07-14 21:11:37.202938] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:25.852 [2024-07-14 21:11:37.203010] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:25.852 [2024-07-14 21:11:37.203037] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105a8434c80 00:12:25.852 [2024-07-14 21:11:37.203044] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:25.852 [2024-07-14 21:11:37.203758] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:25.852 [2024-07-14 21:11:37.203813] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:25.852 [2024-07-14 21:11:37.203837] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:25.852 [2024-07-14 21:11:37.203864] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:25.852 pt1 00:12:25.852 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:25.852 
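Note: the -17 (File exists) failure above is deliberate. The three malloc bdevs still carry the superblock of the raid bdev that was deleted earlier, so bdev_raid_create must refuse to build a fresh raid_bdev1 on top of them; the NOT wrapper turns that expected failure into a pass. A standalone sketch of the same expectation, assuming the same rpc.py socket (the explicit if/exit stands in for the autotest NOT helper and is not that helper):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    if $rpc bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3' -n raid_bdev1; then
        echo "bdev_raid_create unexpectedly succeeded on superblock-claimed bdevs" >&2
        exit 1
    fi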
21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:25.852 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:25.852 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:25.852 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:25.852 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:25.852 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:25.852 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:25.852 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:25.852 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:25.852 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:25.852 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:26.111 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:26.111 "name": "raid_bdev1", 00:12:26.111 "uuid": "a586b7bd-4225-11ef-aa83-81fbc7dfef58", 00:12:26.111 "strip_size_kb": 0, 00:12:26.111 "state": "configuring", 00:12:26.111 "raid_level": "raid1", 00:12:26.111 "superblock": true, 00:12:26.111 "num_base_bdevs": 3, 00:12:26.111 "num_base_bdevs_discovered": 1, 00:12:26.111 "num_base_bdevs_operational": 3, 00:12:26.111 "base_bdevs_list": [ 00:12:26.111 { 00:12:26.111 "name": "pt1", 00:12:26.111 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:26.111 "is_configured": true, 00:12:26.111 "data_offset": 2048, 00:12:26.111 "data_size": 63488 00:12:26.111 }, 00:12:26.111 { 00:12:26.111 "name": null, 00:12:26.111 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:26.111 "is_configured": false, 00:12:26.111 "data_offset": 2048, 00:12:26.111 "data_size": 63488 00:12:26.111 }, 00:12:26.111 { 00:12:26.111 "name": null, 00:12:26.111 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:26.111 "is_configured": false, 00:12:26.111 "data_offset": 2048, 00:12:26.111 "data_size": 63488 00:12:26.111 } 00:12:26.111 ] 00:12:26.111 }' 00:12:26.111 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:26.111 21:11:37 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:26.370 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 3 -gt 2 ']' 00:12:26.370 21:11:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:26.627 [2024-07-14 21:11:37.998955] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:26.627 [2024-07-14 21:11:37.999028] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:26.627 [2024-07-14 21:11:37.999038] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105a8435680 00:12:26.627 [2024-07-14 21:11:37.999045] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:26.628 [2024-07-14 21:11:37.999174] vbdev_passthru.c: 708:vbdev_passthru_register: 
*NOTICE*: pt_bdev registered 00:12:26.628 [2024-07-14 21:11:37.999184] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:26.628 [2024-07-14 21:11:37.999222] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:26.628 [2024-07-14 21:11:37.999247] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:26.628 pt2 00:12:26.628 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:26.886 [2024-07-14 21:11:38.262971] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:12:26.886 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:12:26.886 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:26.886 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:26.886 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:26.886 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:26.886 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:26.886 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:26.886 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:26.886 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:26.886 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:26.886 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:26.886 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.144 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:27.144 "name": "raid_bdev1", 00:12:27.144 "uuid": "a586b7bd-4225-11ef-aa83-81fbc7dfef58", 00:12:27.144 "strip_size_kb": 0, 00:12:27.144 "state": "configuring", 00:12:27.144 "raid_level": "raid1", 00:12:27.144 "superblock": true, 00:12:27.144 "num_base_bdevs": 3, 00:12:27.144 "num_base_bdevs_discovered": 1, 00:12:27.144 "num_base_bdevs_operational": 3, 00:12:27.144 "base_bdevs_list": [ 00:12:27.144 { 00:12:27.144 "name": "pt1", 00:12:27.144 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:27.144 "is_configured": true, 00:12:27.144 "data_offset": 2048, 00:12:27.144 "data_size": 63488 00:12:27.144 }, 00:12:27.144 { 00:12:27.144 "name": null, 00:12:27.144 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:27.144 "is_configured": false, 00:12:27.144 "data_offset": 2048, 00:12:27.144 "data_size": 63488 00:12:27.144 }, 00:12:27.144 { 00:12:27.144 "name": null, 00:12:27.144 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:27.144 "is_configured": false, 00:12:27.144 "data_offset": 2048, 00:12:27.144 "data_size": 63488 00:12:27.144 } 00:12:27.144 ] 00:12:27.144 }' 00:12:27.144 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:27.144 21:11:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:27.402 21:11:38 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:12:27.402 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:27.403 21:11:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:27.660 [2024-07-14 21:11:39.071006] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:27.660 [2024-07-14 21:11:39.071069] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.660 [2024-07-14 21:11:39.071112] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105a8435680 00:12:27.660 [2024-07-14 21:11:39.071119] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.660 [2024-07-14 21:11:39.071250] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.660 [2024-07-14 21:11:39.071268] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:27.660 [2024-07-14 21:11:39.071293] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:27.660 [2024-07-14 21:11:39.071302] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:27.660 pt2 00:12:27.660 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:27.660 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:27.660 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:27.918 [2024-07-14 21:11:39.335015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:27.918 [2024-07-14 21:11:39.335068] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:27.918 [2024-07-14 21:11:39.335096] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105a8435400 00:12:27.918 [2024-07-14 21:11:39.335120] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:27.918 [2024-07-14 21:11:39.335257] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:27.918 [2024-07-14 21:11:39.335275] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:27.918 [2024-07-14 21:11:39.335298] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:27.918 [2024-07-14 21:11:39.335307] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:27.918 [2024-07-14 21:11:39.335336] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x105a8434780 00:12:27.918 [2024-07-14 21:11:39.335341] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:27.918 [2024-07-14 21:11:39.335362] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x105a8497e20 00:12:27.918 [2024-07-14 21:11:39.335428] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x105a8434780 00:12:27.918 [2024-07-14 21:11:39.335433] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x105a8434780 00:12:27.918 [2024-07-14 21:11:39.335456] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:27.918 
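Note: with pt3 claimed, raid_bdev1 has just moved from configuring to online (the io device register / blockcnt 63488 lines above). The verify_raid_bdev_state helper the script keeps invoking reduces to one RPC plus jq field checks; a simplified stand-in, assuming the same socket, with field names taken from the JSON dumps in this log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    info=$($rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r .state <<< "$info")" = online ]                  || exit 1   # expected state
    [ "$(jq -r .raid_level <<< "$info")" = raid1 ]              || exit 1   # expected raid level
    [ "$(jq -r .num_base_bdevs_discovered <<< "$info")" -eq 3 ] || exit 1   # all three base bdevs found

The same pattern, with 'configuring' and lower discovered/operational counts, covers the degraded assemblies exercised later in this run.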
pt3 00:12:27.918 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:12:27.918 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:12:27.918 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:27.918 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:27.918 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:27.918 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:27.918 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:27.918 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:27.918 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:27.918 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:27.919 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:27.919 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:27.919 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:27.919 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:28.176 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:28.176 "name": "raid_bdev1", 00:12:28.176 "uuid": "a586b7bd-4225-11ef-aa83-81fbc7dfef58", 00:12:28.176 "strip_size_kb": 0, 00:12:28.176 "state": "online", 00:12:28.176 "raid_level": "raid1", 00:12:28.176 "superblock": true, 00:12:28.176 "num_base_bdevs": 3, 00:12:28.176 "num_base_bdevs_discovered": 3, 00:12:28.176 "num_base_bdevs_operational": 3, 00:12:28.176 "base_bdevs_list": [ 00:12:28.176 { 00:12:28.176 "name": "pt1", 00:12:28.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.176 "is_configured": true, 00:12:28.176 "data_offset": 2048, 00:12:28.176 "data_size": 63488 00:12:28.176 }, 00:12:28.176 { 00:12:28.176 "name": "pt2", 00:12:28.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.176 "is_configured": true, 00:12:28.176 "data_offset": 2048, 00:12:28.176 "data_size": 63488 00:12:28.176 }, 00:12:28.176 { 00:12:28.176 "name": "pt3", 00:12:28.176 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.176 "is_configured": true, 00:12:28.176 "data_offset": 2048, 00:12:28.176 "data_size": 63488 00:12:28.176 } 00:12:28.176 ] 00:12:28.176 }' 00:12:28.176 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:28.176 21:11:39 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:28.434 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:12:28.434 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:12:28.434 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:28.434 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:28.434 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 
00:12:28.434 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:28.434 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:28.434 21:11:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:28.691 [2024-07-14 21:11:40.091079] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:28.691 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:28.691 "name": "raid_bdev1", 00:12:28.691 "aliases": [ 00:12:28.691 "a586b7bd-4225-11ef-aa83-81fbc7dfef58" 00:12:28.691 ], 00:12:28.691 "product_name": "Raid Volume", 00:12:28.691 "block_size": 512, 00:12:28.691 "num_blocks": 63488, 00:12:28.691 "uuid": "a586b7bd-4225-11ef-aa83-81fbc7dfef58", 00:12:28.691 "assigned_rate_limits": { 00:12:28.691 "rw_ios_per_sec": 0, 00:12:28.691 "rw_mbytes_per_sec": 0, 00:12:28.691 "r_mbytes_per_sec": 0, 00:12:28.691 "w_mbytes_per_sec": 0 00:12:28.691 }, 00:12:28.691 "claimed": false, 00:12:28.691 "zoned": false, 00:12:28.691 "supported_io_types": { 00:12:28.691 "read": true, 00:12:28.691 "write": true, 00:12:28.691 "unmap": false, 00:12:28.691 "flush": false, 00:12:28.691 "reset": true, 00:12:28.691 "nvme_admin": false, 00:12:28.691 "nvme_io": false, 00:12:28.691 "nvme_io_md": false, 00:12:28.691 "write_zeroes": true, 00:12:28.691 "zcopy": false, 00:12:28.691 "get_zone_info": false, 00:12:28.691 "zone_management": false, 00:12:28.691 "zone_append": false, 00:12:28.691 "compare": false, 00:12:28.691 "compare_and_write": false, 00:12:28.691 "abort": false, 00:12:28.691 "seek_hole": false, 00:12:28.691 "seek_data": false, 00:12:28.691 "copy": false, 00:12:28.691 "nvme_iov_md": false 00:12:28.691 }, 00:12:28.691 "memory_domains": [ 00:12:28.691 { 00:12:28.691 "dma_device_id": "system", 00:12:28.691 "dma_device_type": 1 00:12:28.691 }, 00:12:28.691 { 00:12:28.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.691 "dma_device_type": 2 00:12:28.691 }, 00:12:28.691 { 00:12:28.691 "dma_device_id": "system", 00:12:28.691 "dma_device_type": 1 00:12:28.691 }, 00:12:28.691 { 00:12:28.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.691 "dma_device_type": 2 00:12:28.691 }, 00:12:28.691 { 00:12:28.691 "dma_device_id": "system", 00:12:28.691 "dma_device_type": 1 00:12:28.691 }, 00:12:28.691 { 00:12:28.691 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.691 "dma_device_type": 2 00:12:28.691 } 00:12:28.691 ], 00:12:28.691 "driver_specific": { 00:12:28.692 "raid": { 00:12:28.692 "uuid": "a586b7bd-4225-11ef-aa83-81fbc7dfef58", 00:12:28.692 "strip_size_kb": 0, 00:12:28.692 "state": "online", 00:12:28.692 "raid_level": "raid1", 00:12:28.692 "superblock": true, 00:12:28.692 "num_base_bdevs": 3, 00:12:28.692 "num_base_bdevs_discovered": 3, 00:12:28.692 "num_base_bdevs_operational": 3, 00:12:28.692 "base_bdevs_list": [ 00:12:28.692 { 00:12:28.692 "name": "pt1", 00:12:28.692 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.692 "is_configured": true, 00:12:28.692 "data_offset": 2048, 00:12:28.692 "data_size": 63488 00:12:28.692 }, 00:12:28.692 { 00:12:28.692 "name": "pt2", 00:12:28.692 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:28.692 "is_configured": true, 00:12:28.692 "data_offset": 2048, 00:12:28.692 "data_size": 63488 00:12:28.692 }, 00:12:28.692 { 00:12:28.692 "name": "pt3", 00:12:28.692 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:28.692 
"is_configured": true, 00:12:28.692 "data_offset": 2048, 00:12:28.692 "data_size": 63488 00:12:28.692 } 00:12:28.692 ] 00:12:28.692 } 00:12:28.692 } 00:12:28.692 }' 00:12:28.692 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:28.692 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:12:28.692 pt2 00:12:28.692 pt3' 00:12:28.692 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:28.692 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:12:28.692 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:28.950 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:28.950 "name": "pt1", 00:12:28.950 "aliases": [ 00:12:28.950 "00000000-0000-0000-0000-000000000001" 00:12:28.950 ], 00:12:28.950 "product_name": "passthru", 00:12:28.950 "block_size": 512, 00:12:28.950 "num_blocks": 65536, 00:12:28.950 "uuid": "00000000-0000-0000-0000-000000000001", 00:12:28.950 "assigned_rate_limits": { 00:12:28.950 "rw_ios_per_sec": 0, 00:12:28.950 "rw_mbytes_per_sec": 0, 00:12:28.950 "r_mbytes_per_sec": 0, 00:12:28.950 "w_mbytes_per_sec": 0 00:12:28.950 }, 00:12:28.950 "claimed": true, 00:12:28.950 "claim_type": "exclusive_write", 00:12:28.950 "zoned": false, 00:12:28.950 "supported_io_types": { 00:12:28.950 "read": true, 00:12:28.950 "write": true, 00:12:28.950 "unmap": true, 00:12:28.950 "flush": true, 00:12:28.950 "reset": true, 00:12:28.950 "nvme_admin": false, 00:12:28.950 "nvme_io": false, 00:12:28.950 "nvme_io_md": false, 00:12:28.950 "write_zeroes": true, 00:12:28.950 "zcopy": true, 00:12:28.950 "get_zone_info": false, 00:12:28.950 "zone_management": false, 00:12:28.950 "zone_append": false, 00:12:28.950 "compare": false, 00:12:28.950 "compare_and_write": false, 00:12:28.950 "abort": true, 00:12:28.950 "seek_hole": false, 00:12:28.950 "seek_data": false, 00:12:28.950 "copy": true, 00:12:28.950 "nvme_iov_md": false 00:12:28.950 }, 00:12:28.950 "memory_domains": [ 00:12:28.950 { 00:12:28.950 "dma_device_id": "system", 00:12:28.950 "dma_device_type": 1 00:12:28.950 }, 00:12:28.950 { 00:12:28.950 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:28.950 "dma_device_type": 2 00:12:28.950 } 00:12:28.950 ], 00:12:28.950 "driver_specific": { 00:12:28.950 "passthru": { 00:12:28.950 "name": "pt1", 00:12:28.950 "base_bdev_name": "malloc1" 00:12:28.950 } 00:12:28.950 } 00:12:28.950 }' 00:12:28.950 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.950 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:28.950 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:28.950 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.950 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:28.950 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:28.950 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.950 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:28.950 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 
-- # [[ null == null ]] 00:12:28.951 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.951 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:28.951 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:28.951 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:28.951 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:28.951 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:29.209 "name": "pt2", 00:12:29.209 "aliases": [ 00:12:29.209 "00000000-0000-0000-0000-000000000002" 00:12:29.209 ], 00:12:29.209 "product_name": "passthru", 00:12:29.209 "block_size": 512, 00:12:29.209 "num_blocks": 65536, 00:12:29.209 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:29.209 "assigned_rate_limits": { 00:12:29.209 "rw_ios_per_sec": 0, 00:12:29.209 "rw_mbytes_per_sec": 0, 00:12:29.209 "r_mbytes_per_sec": 0, 00:12:29.209 "w_mbytes_per_sec": 0 00:12:29.209 }, 00:12:29.209 "claimed": true, 00:12:29.209 "claim_type": "exclusive_write", 00:12:29.209 "zoned": false, 00:12:29.209 "supported_io_types": { 00:12:29.209 "read": true, 00:12:29.209 "write": true, 00:12:29.209 "unmap": true, 00:12:29.209 "flush": true, 00:12:29.209 "reset": true, 00:12:29.209 "nvme_admin": false, 00:12:29.209 "nvme_io": false, 00:12:29.209 "nvme_io_md": false, 00:12:29.209 "write_zeroes": true, 00:12:29.209 "zcopy": true, 00:12:29.209 "get_zone_info": false, 00:12:29.209 "zone_management": false, 00:12:29.209 "zone_append": false, 00:12:29.209 "compare": false, 00:12:29.209 "compare_and_write": false, 00:12:29.209 "abort": true, 00:12:29.209 "seek_hole": false, 00:12:29.209 "seek_data": false, 00:12:29.209 "copy": true, 00:12:29.209 "nvme_iov_md": false 00:12:29.209 }, 00:12:29.209 "memory_domains": [ 00:12:29.209 { 00:12:29.209 "dma_device_id": "system", 00:12:29.209 "dma_device_type": 1 00:12:29.209 }, 00:12:29.209 { 00:12:29.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.209 "dma_device_type": 2 00:12:29.209 } 00:12:29.209 ], 00:12:29.209 "driver_specific": { 00:12:29.209 "passthru": { 00:12:29.209 "name": "pt2", 00:12:29.209 "base_bdev_name": "malloc2" 00:12:29.209 } 00:12:29.209 } 00:12:29.209 }' 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:29.209 21:11:40 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:29.209 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:12:29.466 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:29.466 "name": "pt3", 00:12:29.466 "aliases": [ 00:12:29.466 "00000000-0000-0000-0000-000000000003" 00:12:29.466 ], 00:12:29.466 "product_name": "passthru", 00:12:29.466 "block_size": 512, 00:12:29.466 "num_blocks": 65536, 00:12:29.466 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:29.466 "assigned_rate_limits": { 00:12:29.466 "rw_ios_per_sec": 0, 00:12:29.466 "rw_mbytes_per_sec": 0, 00:12:29.466 "r_mbytes_per_sec": 0, 00:12:29.466 "w_mbytes_per_sec": 0 00:12:29.466 }, 00:12:29.466 "claimed": true, 00:12:29.466 "claim_type": "exclusive_write", 00:12:29.466 "zoned": false, 00:12:29.466 "supported_io_types": { 00:12:29.467 "read": true, 00:12:29.467 "write": true, 00:12:29.467 "unmap": true, 00:12:29.467 "flush": true, 00:12:29.467 "reset": true, 00:12:29.467 "nvme_admin": false, 00:12:29.467 "nvme_io": false, 00:12:29.467 "nvme_io_md": false, 00:12:29.467 "write_zeroes": true, 00:12:29.467 "zcopy": true, 00:12:29.467 "get_zone_info": false, 00:12:29.467 "zone_management": false, 00:12:29.467 "zone_append": false, 00:12:29.467 "compare": false, 00:12:29.467 "compare_and_write": false, 00:12:29.467 "abort": true, 00:12:29.467 "seek_hole": false, 00:12:29.467 "seek_data": false, 00:12:29.467 "copy": true, 00:12:29.467 "nvme_iov_md": false 00:12:29.467 }, 00:12:29.467 "memory_domains": [ 00:12:29.467 { 00:12:29.467 "dma_device_id": "system", 00:12:29.467 "dma_device_type": 1 00:12:29.467 }, 00:12:29.467 { 00:12:29.467 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:29.467 "dma_device_type": 2 00:12:29.467 } 00:12:29.467 ], 00:12:29.467 "driver_specific": { 00:12:29.467 "passthru": { 00:12:29.467 "name": "pt3", 00:12:29.467 "base_bdev_name": "malloc3" 00:12:29.467 } 00:12:29.467 } 00:12:29.467 }' 00:12:29.467 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:29.467 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:29.467 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:29.467 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:29.467 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:29.467 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:29.467 21:11:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:29.467 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:29.467 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:29.467 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:29.724 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:29.724 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# [[ null == null ]] 00:12:29.724 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:12:29.724 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:29.724 [2024-07-14 21:11:41.235151] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:29.724 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' a586b7bd-4225-11ef-aa83-81fbc7dfef58 '!=' a586b7bd-4225-11ef-aa83-81fbc7dfef58 ']' 00:12:29.724 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:12:29.724 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:29.724 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:29.724 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:12:29.983 [2024-07-14 21:11:41.511127] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:30.241 "name": "raid_bdev1", 00:12:30.241 "uuid": "a586b7bd-4225-11ef-aa83-81fbc7dfef58", 00:12:30.241 "strip_size_kb": 0, 00:12:30.241 "state": "online", 00:12:30.241 "raid_level": "raid1", 00:12:30.241 "superblock": true, 00:12:30.241 "num_base_bdevs": 3, 00:12:30.241 "num_base_bdevs_discovered": 2, 00:12:30.241 "num_base_bdevs_operational": 2, 00:12:30.241 "base_bdevs_list": [ 00:12:30.241 { 00:12:30.241 "name": null, 00:12:30.241 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:30.241 "is_configured": false, 00:12:30.241 "data_offset": 2048, 00:12:30.241 "data_size": 63488 00:12:30.241 }, 00:12:30.241 { 00:12:30.241 "name": "pt2", 00:12:30.241 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:30.241 "is_configured": true, 00:12:30.241 "data_offset": 2048, 00:12:30.241 "data_size": 63488 00:12:30.241 }, 00:12:30.241 { 00:12:30.241 "name": "pt3", 
00:12:30.241 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:30.241 "is_configured": true, 00:12:30.241 "data_offset": 2048, 00:12:30.241 "data_size": 63488 00:12:30.241 } 00:12:30.241 ] 00:12:30.241 }' 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:30.241 21:11:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:30.807 21:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:30.807 [2024-07-14 21:11:42.303202] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:30.807 [2024-07-14 21:11:42.303222] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:30.807 [2024-07-14 21:11:42.303261] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:30.807 [2024-07-14 21:11:42.303289] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:30.807 [2024-07-14 21:11:42.303293] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x105a8434780 name raid_bdev1, state offline 00:12:30.807 21:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:30.807 21:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:12:31.065 21:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:12:31.065 21:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:12:31.065 21:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:12:31.065 21:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:31.065 21:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:12:31.323 21:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:31.323 21:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:31.323 21:11:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:31.582 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:12:31.582 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:12:31.582 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:12:31.582 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:12:31.582 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:12:31.839 [2024-07-14 21:11:43.223253] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:12:31.839 [2024-07-14 21:11:43.223314] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:31.839 [2024-07-14 21:11:43.223341] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105a8435400 00:12:31.839 [2024-07-14 21:11:43.223349] vbdev_passthru.c: 
695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:31.839 [2024-07-14 21:11:43.224093] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:31.839 [2024-07-14 21:11:43.224133] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:12:31.839 [2024-07-14 21:11:43.224174] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:12:31.839 [2024-07-14 21:11:43.224186] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:31.839 pt2 00:12:31.839 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:31.839 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:31.840 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:31.840 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:31.840 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:31.840 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:31.840 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:31.840 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:31.840 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:31.840 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:31.840 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:31.840 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.097 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:32.097 "name": "raid_bdev1", 00:12:32.097 "uuid": "a586b7bd-4225-11ef-aa83-81fbc7dfef58", 00:12:32.097 "strip_size_kb": 0, 00:12:32.097 "state": "configuring", 00:12:32.097 "raid_level": "raid1", 00:12:32.097 "superblock": true, 00:12:32.097 "num_base_bdevs": 3, 00:12:32.097 "num_base_bdevs_discovered": 1, 00:12:32.097 "num_base_bdevs_operational": 2, 00:12:32.097 "base_bdevs_list": [ 00:12:32.097 { 00:12:32.097 "name": null, 00:12:32.097 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.097 "is_configured": false, 00:12:32.097 "data_offset": 2048, 00:12:32.097 "data_size": 63488 00:12:32.097 }, 00:12:32.097 { 00:12:32.097 "name": "pt2", 00:12:32.098 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.098 "is_configured": true, 00:12:32.098 "data_offset": 2048, 00:12:32.098 "data_size": 63488 00:12:32.098 }, 00:12:32.098 { 00:12:32.098 "name": null, 00:12:32.098 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.098 "is_configured": false, 00:12:32.098 "data_offset": 2048, 00:12:32.098 "data_size": 63488 00:12:32.098 } 00:12:32.098 ] 00:12:32.098 }' 00:12:32.098 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:32.098 21:11:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:32.355 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:12:32.355 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < 
num_base_bdevs - 1 )) 00:12:32.355 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=2 00:12:32.355 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:32.614 [2024-07-14 21:11:43.959281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:32.614 [2024-07-14 21:11:43.959341] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:32.614 [2024-07-14 21:11:43.959353] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105a8434780 00:12:32.614 [2024-07-14 21:11:43.959361] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:32.614 [2024-07-14 21:11:43.959490] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:32.614 [2024-07-14 21:11:43.959501] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:32.614 [2024-07-14 21:11:43.959524] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:32.614 [2024-07-14 21:11:43.959537] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:32.614 [2024-07-14 21:11:43.959566] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x105a8435180 00:12:32.614 [2024-07-14 21:11:43.959570] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:32.614 [2024-07-14 21:11:43.959590] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x105a8497e20 00:12:32.614 [2024-07-14 21:11:43.959639] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x105a8435180 00:12:32.614 [2024-07-14 21:11:43.959644] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x105a8435180 00:12:32.614 [2024-07-14 21:11:43.959666] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:32.614 pt3 00:12:32.614 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:32.614 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:32.614 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:32.614 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:32.614 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:32.614 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:32.614 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:32.614 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:32.614 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:32.614 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:32.614 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:32.614 21:11:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:32.890 21:11:44 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:32.890 "name": "raid_bdev1", 00:12:32.890 "uuid": "a586b7bd-4225-11ef-aa83-81fbc7dfef58", 00:12:32.890 "strip_size_kb": 0, 00:12:32.890 "state": "online", 00:12:32.890 "raid_level": "raid1", 00:12:32.890 "superblock": true, 00:12:32.890 "num_base_bdevs": 3, 00:12:32.890 "num_base_bdevs_discovered": 2, 00:12:32.890 "num_base_bdevs_operational": 2, 00:12:32.890 "base_bdevs_list": [ 00:12:32.890 { 00:12:32.890 "name": null, 00:12:32.890 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:32.890 "is_configured": false, 00:12:32.890 "data_offset": 2048, 00:12:32.890 "data_size": 63488 00:12:32.890 }, 00:12:32.890 { 00:12:32.890 "name": "pt2", 00:12:32.890 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:32.890 "is_configured": true, 00:12:32.890 "data_offset": 2048, 00:12:32.890 "data_size": 63488 00:12:32.890 }, 00:12:32.890 { 00:12:32.890 "name": "pt3", 00:12:32.890 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:32.890 "is_configured": true, 00:12:32.890 "data_offset": 2048, 00:12:32.890 "data_size": 63488 00:12:32.890 } 00:12:32.890 ] 00:12:32.890 }' 00:12:32.890 21:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:32.890 21:11:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:33.148 21:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:33.406 [2024-07-14 21:11:44.795307] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.407 [2024-07-14 21:11:44.795327] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:33.407 [2024-07-14 21:11:44.795380] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:33.407 [2024-07-14 21:11:44.795394] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:33.407 [2024-07-14 21:11:44.795399] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x105a8435180 name raid_bdev1, state offline 00:12:33.407 21:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:12:33.407 21:11:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:33.665 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:12:33.665 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:12:33.665 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 3 -gt 2 ']' 00:12:33.665 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=2 00:12:33.665 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:12:33.924 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:12:33.924 [2024-07-14 21:11:45.463305] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:12:33.924 [2024-07-14 21:11:45.463366] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:33.924 [2024-07-14 21:11:45.463392] 
vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105a8434780 00:12:33.924 [2024-07-14 21:11:45.463399] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:33.924 [2024-07-14 21:11:45.464145] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:33.924 [2024-07-14 21:11:45.464171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:12:33.924 [2024-07-14 21:11:45.464196] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:12:33.924 [2024-07-14 21:11:45.464208] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:12:33.924 [2024-07-14 21:11:45.464238] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:12:33.924 [2024-07-14 21:11:45.464242] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:33.924 [2024-07-14 21:11:45.464248] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x105a8435180 name raid_bdev1, state configuring 00:12:33.924 [2024-07-14 21:11:45.464256] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:12:33.924 pt1 00:12:34.181 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 3 -gt 2 ']' 00:12:34.181 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:12:34.181 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:34.181 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:34.181 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:34.181 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:34.181 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:34.182 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:34.182 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:34.182 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:34.182 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:34.182 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:34.182 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:34.182 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:34.182 "name": "raid_bdev1", 00:12:34.182 "uuid": "a586b7bd-4225-11ef-aa83-81fbc7dfef58", 00:12:34.182 "strip_size_kb": 0, 00:12:34.182 "state": "configuring", 00:12:34.182 "raid_level": "raid1", 00:12:34.182 "superblock": true, 00:12:34.182 "num_base_bdevs": 3, 00:12:34.182 "num_base_bdevs_discovered": 1, 00:12:34.182 "num_base_bdevs_operational": 2, 00:12:34.182 "base_bdevs_list": [ 00:12:34.182 { 00:12:34.182 "name": null, 00:12:34.182 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:34.182 "is_configured": false, 00:12:34.182 "data_offset": 2048, 00:12:34.182 "data_size": 63488 00:12:34.182 }, 00:12:34.182 { 00:12:34.182 "name": 
"pt2", 00:12:34.182 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:34.182 "is_configured": true, 00:12:34.182 "data_offset": 2048, 00:12:34.182 "data_size": 63488 00:12:34.182 }, 00:12:34.182 { 00:12:34.182 "name": null, 00:12:34.182 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:34.182 "is_configured": false, 00:12:34.182 "data_offset": 2048, 00:12:34.182 "data_size": 63488 00:12:34.182 } 00:12:34.182 ] 00:12:34.182 }' 00:12:34.182 21:11:45 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:34.182 21:11:45 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:34.747 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:12:34.747 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:34.747 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:12:34.747 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:12:35.006 [2024-07-14 21:11:46.523339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:12:35.006 [2024-07-14 21:11:46.523389] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:35.006 [2024-07-14 21:11:46.523401] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x105a8434c80 00:12:35.006 [2024-07-14 21:11:46.523409] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:35.006 [2024-07-14 21:11:46.523534] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:35.006 [2024-07-14 21:11:46.523545] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:12:35.006 [2024-07-14 21:11:46.523568] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:12:35.006 [2024-07-14 21:11:46.523577] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:12:35.006 [2024-07-14 21:11:46.523604] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x105a8435180 00:12:35.006 [2024-07-14 21:11:46.523609] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:35.006 [2024-07-14 21:11:46.523629] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x105a8497e20 00:12:35.006 [2024-07-14 21:11:46.523678] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x105a8435180 00:12:35.006 [2024-07-14 21:11:46.523683] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x105a8435180 00:12:35.006 [2024-07-14 21:11:46.523704] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:35.006 pt3 00:12:35.006 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:35.006 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:35.006 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:35.006 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:35.006 21:11:46 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:35.006 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:35.006 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:35.006 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:35.006 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:35.006 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:35.006 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:35.006 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:35.264 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:35.264 "name": "raid_bdev1", 00:12:35.264 "uuid": "a586b7bd-4225-11ef-aa83-81fbc7dfef58", 00:12:35.264 "strip_size_kb": 0, 00:12:35.264 "state": "online", 00:12:35.264 "raid_level": "raid1", 00:12:35.264 "superblock": true, 00:12:35.264 "num_base_bdevs": 3, 00:12:35.264 "num_base_bdevs_discovered": 2, 00:12:35.264 "num_base_bdevs_operational": 2, 00:12:35.264 "base_bdevs_list": [ 00:12:35.264 { 00:12:35.264 "name": null, 00:12:35.264 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:35.264 "is_configured": false, 00:12:35.264 "data_offset": 2048, 00:12:35.264 "data_size": 63488 00:12:35.264 }, 00:12:35.264 { 00:12:35.264 "name": "pt2", 00:12:35.264 "uuid": "00000000-0000-0000-0000-000000000002", 00:12:35.264 "is_configured": true, 00:12:35.264 "data_offset": 2048, 00:12:35.264 "data_size": 63488 00:12:35.264 }, 00:12:35.264 { 00:12:35.264 "name": "pt3", 00:12:35.264 "uuid": "00000000-0000-0000-0000-000000000003", 00:12:35.264 "is_configured": true, 00:12:35.264 "data_offset": 2048, 00:12:35.264 "data_size": 63488 00:12:35.264 } 00:12:35.264 ] 00:12:35.264 }' 00:12:35.264 21:11:46 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:35.264 21:11:46 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:35.522 21:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:12:35.522 21:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:12:35.780 21:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:12:35.781 21:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:12:35.781 21:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:12:36.040 [2024-07-14 21:11:47.547406] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:36.040 21:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' a586b7bd-4225-11ef-aa83-81fbc7dfef58 '!=' a586b7bd-4225-11ef-aa83-81fbc7dfef58 ']' 00:12:36.040 21:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 57488 00:12:36.040 21:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 57488 ']' 00:12:36.040 21:11:47 
bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 57488 00:12:36.040 21:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:12:36.040 21:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:36.040 21:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 57488 00:12:36.040 21:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:12:36.040 21:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:12:36.040 21:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:12:36.040 killing process with pid 57488 00:12:36.040 21:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 57488' 00:12:36.040 21:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 57488 00:12:36.040 [2024-07-14 21:11:47.572402] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:36.040 [2024-07-14 21:11:47.572422] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:36.040 [2024-07-14 21:11:47.572436] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:36.040 [2024-07-14 21:11:47.572440] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x105a8435180 name raid_bdev1, state offline 00:12:36.040 21:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 57488 00:12:36.300 [2024-07-14 21:11:47.591566] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:36.300 21:11:47 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:12:36.300 00:12:36.300 real 0m17.488s 00:12:36.300 user 0m31.637s 00:12:36.300 sys 0m2.549s 00:12:36.300 21:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:36.300 21:11:47 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.300 ************************************ 00:12:36.300 END TEST raid_superblock_test 00:12:36.300 ************************************ 00:12:36.300 21:11:47 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:36.300 21:11:47 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 3 read 00:12:36.300 21:11:47 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:36.300 21:11:47 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:36.300 21:11:47 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:36.300 ************************************ 00:12:36.300 START TEST raid_read_error_test 00:12:36.300 ************************************ 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 read 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:36.300 21:11:47 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.TY592Wwl3Q 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58034 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58034 /var/tmp/spdk-raid.sock 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 58034 ']' 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:36.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:36.300 21:11:47 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:36.300 [2024-07-14 21:11:47.829438] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
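(For reference: the base-bdev stack this test assembles over the next few seconds is malloc -> error -> passthru for each of the three disks, with a raid1 built on top of the passthru bdevs. A minimal sketch of that RPC sequence, assuming a bdevperf instance is already listening on /var/tmp/spdk-raid.sock and rpc.py is the same copy invoked throughout this log:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    for b in BaseBdev1 BaseBdev2 BaseBdev3; do
      $rpc bdev_malloc_create 32 512 -b "${b}_malloc"      # 32 MiB backing store, 512-byte blocks (65536 blocks)
      $rpc bdev_error_create "${b}_malloc"                 # exposes EE_${b}_malloc for fault injection
      $rpc bdev_passthru_create -b "EE_${b}_malloc" -p "$b"
    done
    $rpc bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s

The read-failure case exercised further below is then injected with 'bdev_error_inject_error EE_BaseBdev1_malloc read failure', exactly as the test does.)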
00:12:36.300 [2024-07-14 21:11:47.829626] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:36.890 EAL: TSC is not safe to use in SMP mode 00:12:36.890 EAL: TSC is not invariant 00:12:36.890 [2024-07-14 21:11:48.364991] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.148 [2024-07-14 21:11:48.454502] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:37.148 [2024-07-14 21:11:48.456809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:37.148 [2024-07-14 21:11:48.457645] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.148 [2024-07-14 21:11:48.457661] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:37.405 21:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:37.405 21:11:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:37.405 21:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:37.405 21:11:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:37.662 BaseBdev1_malloc 00:12:37.663 21:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:37.920 true 00:12:37.920 21:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:38.178 [2024-07-14 21:11:49.629969] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:38.178 [2024-07-14 21:11:49.630025] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.178 [2024-07-14 21:11:49.630057] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x36150ca34780 00:12:38.178 [2024-07-14 21:11:49.630064] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.178 [2024-07-14 21:11:49.630616] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.178 [2024-07-14 21:11:49.630642] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:12:38.178 BaseBdev1 00:12:38.178 21:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:38.178 21:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:38.435 BaseBdev2_malloc 00:12:38.435 21:11:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:38.692 true 00:12:38.692 21:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:38.949 [2024-07-14 21:11:50.318000] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:38.949 [2024-07-14 21:11:50.318056] vbdev_passthru.c: 
635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:38.949 [2024-07-14 21:11:50.318091] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x36150ca34c80 00:12:38.949 [2024-07-14 21:11:50.318099] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:38.949 [2024-07-14 21:11:50.318755] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:38.949 [2024-07-14 21:11:50.318781] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:38.949 BaseBdev2 00:12:38.950 21:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:38.950 21:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:39.207 BaseBdev3_malloc 00:12:39.207 21:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:39.464 true 00:12:39.464 21:11:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:39.720 [2024-07-14 21:11:51.046015] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:39.720 [2024-07-14 21:11:51.046087] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:39.720 [2024-07-14 21:11:51.046126] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x36150ca35180 00:12:39.721 [2024-07-14 21:11:51.046134] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:39.721 [2024-07-14 21:11:51.046827] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:39.721 [2024-07-14 21:11:51.046852] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:39.721 BaseBdev3 00:12:39.721 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:39.977 [2024-07-14 21:11:51.298018] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:39.977 [2024-07-14 21:11:51.298578] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:39.977 [2024-07-14 21:11:51.298603] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:39.977 [2024-07-14 21:11:51.298657] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x36150ca35400 00:12:39.977 [2024-07-14 21:11:51.298663] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:39.977 [2024-07-14 21:11:51.298691] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x36150caa0e20 00:12:39.977 [2024-07-14 21:11:51.298792] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x36150ca35400 00:12:39.977 [2024-07-14 21:11:51.298797] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x36150ca35400 00:12:39.977 [2024-07-14 21:11:51.298821] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:39.977 21:11:51 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:39.977 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:39.977 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:39.977 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:39.977 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:39.977 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:39.977 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:39.977 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:39.977 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:39.977 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:39.977 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:39.977 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:40.234 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:40.234 "name": "raid_bdev1", 00:12:40.234 "uuid": "b06dbe98-4225-11ef-aa83-81fbc7dfef58", 00:12:40.234 "strip_size_kb": 0, 00:12:40.234 "state": "online", 00:12:40.234 "raid_level": "raid1", 00:12:40.234 "superblock": true, 00:12:40.234 "num_base_bdevs": 3, 00:12:40.234 "num_base_bdevs_discovered": 3, 00:12:40.234 "num_base_bdevs_operational": 3, 00:12:40.234 "base_bdevs_list": [ 00:12:40.234 { 00:12:40.234 "name": "BaseBdev1", 00:12:40.234 "uuid": "2f30d73c-4fed-5353-b7d2-a3136e00bc8e", 00:12:40.234 "is_configured": true, 00:12:40.234 "data_offset": 2048, 00:12:40.234 "data_size": 63488 00:12:40.234 }, 00:12:40.234 { 00:12:40.234 "name": "BaseBdev2", 00:12:40.234 "uuid": "21df308c-371b-195e-b5ff-eed105462a92", 00:12:40.234 "is_configured": true, 00:12:40.234 "data_offset": 2048, 00:12:40.234 "data_size": 63488 00:12:40.234 }, 00:12:40.234 { 00:12:40.234 "name": "BaseBdev3", 00:12:40.234 "uuid": "a62ed6ce-9d51-a55e-a900-39e58d135b47", 00:12:40.234 "is_configured": true, 00:12:40.234 "data_offset": 2048, 00:12:40.234 "data_size": 63488 00:12:40.234 } 00:12:40.234 ] 00:12:40.234 }' 00:12:40.234 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:40.234 21:11:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:40.493 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:40.493 21:11:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:40.493 [2024-07-14 21:11:51.942225] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x36150caa0ec0 00:12:41.429 21:11:52 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:12:41.687 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:41.687 21:11:53 bdev_raid.raid_read_error_test -- 
bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=3 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:41.688 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:41.947 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:41.947 "name": "raid_bdev1", 00:12:41.947 "uuid": "b06dbe98-4225-11ef-aa83-81fbc7dfef58", 00:12:41.947 "strip_size_kb": 0, 00:12:41.947 "state": "online", 00:12:41.947 "raid_level": "raid1", 00:12:41.947 "superblock": true, 00:12:41.947 "num_base_bdevs": 3, 00:12:41.947 "num_base_bdevs_discovered": 3, 00:12:41.947 "num_base_bdevs_operational": 3, 00:12:41.947 "base_bdevs_list": [ 00:12:41.947 { 00:12:41.947 "name": "BaseBdev1", 00:12:41.947 "uuid": "2f30d73c-4fed-5353-b7d2-a3136e00bc8e", 00:12:41.947 "is_configured": true, 00:12:41.947 "data_offset": 2048, 00:12:41.947 "data_size": 63488 00:12:41.947 }, 00:12:41.947 { 00:12:41.947 "name": "BaseBdev2", 00:12:41.947 "uuid": "21df308c-371b-195e-b5ff-eed105462a92", 00:12:41.947 "is_configured": true, 00:12:41.947 "data_offset": 2048, 00:12:41.947 "data_size": 63488 00:12:41.947 }, 00:12:41.947 { 00:12:41.947 "name": "BaseBdev3", 00:12:41.947 "uuid": "a62ed6ce-9d51-a55e-a900-39e58d135b47", 00:12:41.947 "is_configured": true, 00:12:41.947 "data_offset": 2048, 00:12:41.947 "data_size": 63488 00:12:41.947 } 00:12:41.947 ] 00:12:41.947 }' 00:12:41.947 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:41.947 21:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.206 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:42.465 [2024-07-14 21:11:53.917758] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:42.465 [2024-07-14 21:11:53.917783] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:42.465 [2024-07-14 21:11:53.918119] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:42.465 [2024-07-14 21:11:53.918128] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:42.465 [2024-07-14 21:11:53.918143] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:42.465 [2024-07-14 21:11:53.918147] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x36150ca35400 name raid_bdev1, state offline 00:12:42.465 0 00:12:42.465 21:11:53 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58034 00:12:42.465 21:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 58034 ']' 00:12:42.465 21:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 58034 00:12:42.465 21:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:12:42.465 21:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:42.465 21:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 58034 00:12:42.465 21:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:12:42.465 21:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:12:42.465 21:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:12:42.465 killing process with pid 58034 00:12:42.465 21:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58034' 00:12:42.465 21:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 58034 00:12:42.465 [2024-07-14 21:11:53.946825] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:42.465 21:11:53 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 58034 00:12:42.465 [2024-07-14 21:11:53.963658] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:12:42.724 21:11:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.TY592Wwl3Q 00:12:42.724 21:11:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:42.724 21:11:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:42.724 21:11:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:12:42.724 21:11:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:12:42.724 21:11:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:42.724 21:11:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:42.724 21:11:54 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:42.724 00:12:42.724 real 0m6.323s 00:12:42.724 user 0m9.857s 00:12:42.724 sys 0m1.052s 00:12:42.724 21:11:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:42.724 ************************************ 00:12:42.724 END TEST raid_read_error_test 00:12:42.724 ************************************ 00:12:42.724 21:11:54 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.724 21:11:54 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:42.724 21:11:54 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 3 write 00:12:42.724 21:11:54 bdev_raid -- 
common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:42.725 21:11:54 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:42.725 21:11:54 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:42.725 ************************************ 00:12:42.725 START TEST raid_write_error_test 00:12:42.725 ************************************ 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 3 write 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=3 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3') 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.pqV2WQAjgm 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=58165 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 58165 /var/tmp/spdk-raid.sock 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 
-z -f -L bdev_raid 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 58165 ']' 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:42.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:42.725 21:11:54 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:42.725 [2024-07-14 21:11:54.204564] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:12:42.725 [2024-07-14 21:11:54.204847] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:43.292 EAL: TSC is not safe to use in SMP mode 00:12:43.292 EAL: TSC is not invariant 00:12:43.292 [2024-07-14 21:11:54.727488] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:43.292 [2024-07-14 21:11:54.815725] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:43.292 [2024-07-14 21:11:54.818029] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:43.292 [2024-07-14 21:11:54.818923] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.292 [2024-07-14 21:11:54.818952] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:43.859 21:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:43.859 21:11:55 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:12:43.859 21:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:43.859 21:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:12:43.859 BaseBdev1_malloc 00:12:44.116 21:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:12:44.116 true 00:12:44.373 21:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:12:44.373 [2024-07-14 21:11:55.903607] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:12:44.373 [2024-07-14 21:11:55.903674] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:44.373 [2024-07-14 21:11:55.903714] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b1787e34780 00:12:44.373 [2024-07-14 21:11:55.903722] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:44.373 [2024-07-14 21:11:55.904397] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:44.373 [2024-07-14 21:11:55.904424] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev1 00:12:44.373 BaseBdev1 00:12:44.373 21:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:44.373 21:11:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:12:44.632 BaseBdev2_malloc 00:12:44.632 21:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:12:44.890 true 00:12:44.890 21:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:12:45.148 [2024-07-14 21:11:56.563620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:12:45.148 [2024-07-14 21:11:56.563669] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.148 [2024-07-14 21:11:56.563701] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b1787e34c80 00:12:45.148 [2024-07-14 21:11:56.563710] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.148 [2024-07-14 21:11:56.564287] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.148 [2024-07-14 21:11:56.564314] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:12:45.148 BaseBdev2 00:12:45.148 21:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:12:45.148 21:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:12:45.406 BaseBdev3_malloc 00:12:45.406 21:11:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:12:45.665 true 00:12:45.665 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:12:45.922 [2024-07-14 21:11:57.267652] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:12:45.922 [2024-07-14 21:11:57.267704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:12:45.922 [2024-07-14 21:11:57.267740] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1b1787e35180 00:12:45.922 [2024-07-14 21:11:57.267747] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:12:45.922 [2024-07-14 21:11:57.268349] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:12:45.922 [2024-07-14 21:11:57.268375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:12:45.922 BaseBdev3 00:12:45.922 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3' -n raid_bdev1 -s 00:12:46.180 [2024-07-14 21:11:57.523679] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:46.180 [2024-07-14 21:11:57.524252] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:46.180 [2024-07-14 21:11:57.524280] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:46.180 [2024-07-14 21:11:57.524338] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1b1787e35400 00:12:46.180 [2024-07-14 21:11:57.524345] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:12:46.180 [2024-07-14 21:11:57.524373] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b1787ea0e20 00:12:46.180 [2024-07-14 21:11:57.524454] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1b1787e35400 00:12:46.180 [2024-07-14 21:11:57.524460] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1b1787e35400 00:12:46.180 [2024-07-14 21:11:57.524487] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:46.181 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:12:46.181 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:46.181 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:46.181 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:46.181 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:46.181 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:46.181 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:46.181 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:46.181 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:46.181 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:46.181 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:46.181 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:46.439 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:46.439 "name": "raid_bdev1", 00:12:46.439 "uuid": "b423b46f-4225-11ef-aa83-81fbc7dfef58", 00:12:46.439 "strip_size_kb": 0, 00:12:46.439 "state": "online", 00:12:46.439 "raid_level": "raid1", 00:12:46.439 "superblock": true, 00:12:46.439 "num_base_bdevs": 3, 00:12:46.439 "num_base_bdevs_discovered": 3, 00:12:46.439 "num_base_bdevs_operational": 3, 00:12:46.439 "base_bdevs_list": [ 00:12:46.439 { 00:12:46.439 "name": "BaseBdev1", 00:12:46.439 "uuid": "4d7f2166-f287-ee51-8d92-515040229e6e", 00:12:46.439 "is_configured": true, 00:12:46.439 "data_offset": 2048, 00:12:46.439 "data_size": 63488 00:12:46.439 }, 00:12:46.439 { 00:12:46.439 "name": "BaseBdev2", 00:12:46.439 "uuid": "eedbed7d-cc69-6451-b4c9-e3f36c6f1e23", 00:12:46.439 "is_configured": true, 00:12:46.439 "data_offset": 2048, 00:12:46.439 "data_size": 63488 00:12:46.439 }, 00:12:46.439 { 00:12:46.439 "name": "BaseBdev3", 00:12:46.439 "uuid": "1d14870b-e820-2f53-a39a-5bbe0805aee0", 00:12:46.439 "is_configured": true, 00:12:46.439 "data_offset": 2048, 00:12:46.439 
"data_size": 63488 00:12:46.439 } 00:12:46.439 ] 00:12:46.439 }' 00:12:46.439 21:11:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:46.439 21:11:57 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:46.698 21:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:12:46.698 21:11:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:12:46.698 [2024-07-14 21:11:58.103850] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1b1787ea0ec0 00:12:47.635 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:12:47.894 [2024-07-14 21:11:59.343789] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:12:47.894 [2024-07-14 21:11:59.343847] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:47.894 [2024-07-14 21:11:59.343987] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1b1787ea0ec0 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=2 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:47.894 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:12:48.152 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:48.152 "name": "raid_bdev1", 00:12:48.152 "uuid": "b423b46f-4225-11ef-aa83-81fbc7dfef58", 00:12:48.152 "strip_size_kb": 0, 00:12:48.152 "state": "online", 00:12:48.152 "raid_level": "raid1", 00:12:48.152 "superblock": true, 00:12:48.152 "num_base_bdevs": 3, 00:12:48.152 
"num_base_bdevs_discovered": 2, 00:12:48.152 "num_base_bdevs_operational": 2, 00:12:48.152 "base_bdevs_list": [ 00:12:48.152 { 00:12:48.152 "name": null, 00:12:48.152 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:48.152 "is_configured": false, 00:12:48.152 "data_offset": 2048, 00:12:48.152 "data_size": 63488 00:12:48.152 }, 00:12:48.152 { 00:12:48.152 "name": "BaseBdev2", 00:12:48.152 "uuid": "eedbed7d-cc69-6451-b4c9-e3f36c6f1e23", 00:12:48.152 "is_configured": true, 00:12:48.152 "data_offset": 2048, 00:12:48.152 "data_size": 63488 00:12:48.152 }, 00:12:48.152 { 00:12:48.152 "name": "BaseBdev3", 00:12:48.152 "uuid": "1d14870b-e820-2f53-a39a-5bbe0805aee0", 00:12:48.152 "is_configured": true, 00:12:48.152 "data_offset": 2048, 00:12:48.152 "data_size": 63488 00:12:48.152 } 00:12:48.152 ] 00:12:48.152 }' 00:12:48.152 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:48.152 21:11:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.409 21:11:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:12:48.667 [2024-07-14 21:12:00.210710] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:12:48.667 [2024-07-14 21:12:00.210739] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:48.667 [2024-07-14 21:12:00.211118] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:48.667 [2024-07-14 21:12:00.211143] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:48.667 [2024-07-14 21:12:00.211171] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:12:48.667 [2024-07-14 21:12:00.211175] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1b1787e35400 name raid_bdev1, state offline 00:12:48.926 0 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 58165 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 58165 ']' 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 58165 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 58165 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:12:48.926 killing process with pid 58165 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58165' 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 58165 00:12:48.926 [2024-07-14 21:12:00.240362] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 58165 00:12:48.926 [2024-07-14 21:12:00.257275] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: 
raid_bdev_exit 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.pqV2WQAjgm 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:12:48.926 00:12:48.926 real 0m6.260s 00:12:48.926 user 0m9.696s 00:12:48.926 sys 0m1.086s 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:48.926 21:12:00 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:12:48.926 ************************************ 00:12:48.926 END TEST raid_write_error_test 00:12:48.926 ************************************ 00:12:49.184 21:12:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:12:49.184 21:12:00 bdev_raid -- bdev/bdev_raid.sh@865 -- # for n in {2..4} 00:12:49.184 21:12:00 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:12:49.184 21:12:00 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid0 4 false 00:12:49.184 21:12:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:12:49.184 21:12:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:49.184 21:12:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:12:49.184 ************************************ 00:12:49.184 START TEST raid_state_function_test 00:12:49.184 ************************************ 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 false 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # 
echo BaseBdev3 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=58294 00:12:49.184 Process raid pid: 58294 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 58294' 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 58294 /var/tmp/spdk-raid.sock 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 58294 ']' 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:49.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:49.184 21:12:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:49.184 [2024-07-14 21:12:00.508871] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
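(The state-function test starting here drives a four-disk raid0 through its "configuring" state by creating the raid before any base bdev exists and then adding members one at a time. A minimal sketch of the first step, assuming the bdev_svc app launched above is listening on /var/tmp/spdk-raid.sock; the trailing .state projection is illustrative and not part of the test's own jq filter:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # None of the four base bdevs exist yet, so the raid cannot come online.
    $rpc bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid") | .state'   # -> configuring
)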
00:12:49.184 [2024-07-14 21:12:00.509044] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:12:49.750 EAL: TSC is not safe to use in SMP mode 00:12:49.750 EAL: TSC is not invariant 00:12:49.750 [2024-07-14 21:12:01.161869] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:49.750 [2024-07-14 21:12:01.263045] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:12:49.750 [2024-07-14 21:12:01.265524] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.750 [2024-07-14 21:12:01.266343] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:49.750 [2024-07-14 21:12:01.266358] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:12:50.315 21:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:50.315 21:12:01 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:12:50.315 21:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:50.315 [2024-07-14 21:12:01.825347] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:50.315 [2024-07-14 21:12:01.825401] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:50.315 [2024-07-14 21:12:01.825406] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:50.315 [2024-07-14 21:12:01.825427] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:50.315 [2024-07-14 21:12:01.825430] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:50.315 [2024-07-14 21:12:01.825436] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:50.316 [2024-07-14 21:12:01.825439] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:50.316 [2024-07-14 21:12:01.825446] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:50.316 21:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:50.316 21:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:50.316 21:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:50.316 21:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:50.316 21:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:50.316 21:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:50.316 21:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:50.316 21:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:50.316 21:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:50.316 21:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:50.316 21:12:01 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:50.316 21:12:01 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:50.573 21:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:50.573 "name": "Existed_Raid", 00:12:50.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.573 "strip_size_kb": 64, 00:12:50.573 "state": "configuring", 00:12:50.573 "raid_level": "raid0", 00:12:50.573 "superblock": false, 00:12:50.573 "num_base_bdevs": 4, 00:12:50.573 "num_base_bdevs_discovered": 0, 00:12:50.573 "num_base_bdevs_operational": 4, 00:12:50.573 "base_bdevs_list": [ 00:12:50.573 { 00:12:50.573 "name": "BaseBdev1", 00:12:50.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.573 "is_configured": false, 00:12:50.573 "data_offset": 0, 00:12:50.573 "data_size": 0 00:12:50.573 }, 00:12:50.573 { 00:12:50.573 "name": "BaseBdev2", 00:12:50.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.573 "is_configured": false, 00:12:50.573 "data_offset": 0, 00:12:50.573 "data_size": 0 00:12:50.573 }, 00:12:50.573 { 00:12:50.573 "name": "BaseBdev3", 00:12:50.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.573 "is_configured": false, 00:12:50.573 "data_offset": 0, 00:12:50.573 "data_size": 0 00:12:50.573 }, 00:12:50.573 { 00:12:50.573 "name": "BaseBdev4", 00:12:50.573 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:50.573 "is_configured": false, 00:12:50.573 "data_offset": 0, 00:12:50.573 "data_size": 0 00:12:50.573 } 00:12:50.573 ] 00:12:50.573 }' 00:12:50.573 21:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:50.573 21:12:02 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:51.137 21:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:51.137 [2024-07-14 21:12:02.653419] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:51.137 [2024-07-14 21:12:02.653450] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x209dcfc34500 name Existed_Raid, state configuring 00:12:51.137 21:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:12:51.395 [2024-07-14 21:12:02.861436] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:12:51.395 [2024-07-14 21:12:02.861498] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:12:51.395 [2024-07-14 21:12:02.861503] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:51.395 [2024-07-14 21:12:02.861509] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:51.395 [2024-07-14 21:12:02.861512] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:51.395 [2024-07-14 21:12:02.861518] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:51.395 [2024-07-14 21:12:02.861520] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
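(As BaseBdev1 is created below, the raid module claims it during examine and num_base_bdevs_discovered moves from 0 to 1 while the set stays "configuring". That progression can be watched with the same get_bdevs/jq pattern the test uses; a sketch, with field names taken from the JSON above and the combined projection being illustrative:

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_malloc_create 32 512 -b BaseBdev1
    $rpc bdev_raid_get_bdevs all | jq -r \
      '.[] | select(.name == "Existed_Raid") | "\(.state) \(.num_base_bdevs_discovered)/\(.num_base_bdevs)"'
    # -> configuring 1/4
)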
00:12:51.395 [2024-07-14 21:12:02.861526] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:51.395 21:12:02 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:12:51.653 [2024-07-14 21:12:03.070625] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:51.653 BaseBdev1 00:12:51.653 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:12:51.653 21:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:12:51.653 21:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:51.653 21:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:51.653 21:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:51.653 21:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:51.653 21:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:51.910 21:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:12:52.168 [ 00:12:52.168 { 00:12:52.168 "name": "BaseBdev1", 00:12:52.168 "aliases": [ 00:12:52.168 "b771ee85-4225-11ef-aa83-81fbc7dfef58" 00:12:52.168 ], 00:12:52.168 "product_name": "Malloc disk", 00:12:52.168 "block_size": 512, 00:12:52.168 "num_blocks": 65536, 00:12:52.168 "uuid": "b771ee85-4225-11ef-aa83-81fbc7dfef58", 00:12:52.168 "assigned_rate_limits": { 00:12:52.168 "rw_ios_per_sec": 0, 00:12:52.168 "rw_mbytes_per_sec": 0, 00:12:52.168 "r_mbytes_per_sec": 0, 00:12:52.168 "w_mbytes_per_sec": 0 00:12:52.168 }, 00:12:52.168 "claimed": true, 00:12:52.168 "claim_type": "exclusive_write", 00:12:52.168 "zoned": false, 00:12:52.168 "supported_io_types": { 00:12:52.168 "read": true, 00:12:52.168 "write": true, 00:12:52.168 "unmap": true, 00:12:52.168 "flush": true, 00:12:52.168 "reset": true, 00:12:52.168 "nvme_admin": false, 00:12:52.168 "nvme_io": false, 00:12:52.168 "nvme_io_md": false, 00:12:52.168 "write_zeroes": true, 00:12:52.168 "zcopy": true, 00:12:52.168 "get_zone_info": false, 00:12:52.168 "zone_management": false, 00:12:52.168 "zone_append": false, 00:12:52.168 "compare": false, 00:12:52.168 "compare_and_write": false, 00:12:52.168 "abort": true, 00:12:52.168 "seek_hole": false, 00:12:52.168 "seek_data": false, 00:12:52.168 "copy": true, 00:12:52.168 "nvme_iov_md": false 00:12:52.168 }, 00:12:52.168 "memory_domains": [ 00:12:52.168 { 00:12:52.168 "dma_device_id": "system", 00:12:52.168 "dma_device_type": 1 00:12:52.168 }, 00:12:52.168 { 00:12:52.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:52.168 "dma_device_type": 2 00:12:52.168 } 00:12:52.168 ], 00:12:52.168 "driver_specific": {} 00:12:52.168 } 00:12:52.168 ] 00:12:52.168 21:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:52.168 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:52.168 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
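Each base device is a 32 MiB malloc bdev with a 512-byte block size, which is why the dumps report "num_blocks": 65536 (32 * 1024 * 1024 / 512). A sketch of the create-and-wait pattern that the waitforbdev helper wraps, under the same socket assumption:

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
sock=/var/tmp/spdk-raid.sock

# 32 MiB backing store with 512-byte logical blocks -> 65536 blocks.
"$rpc" -s "$sock" bdev_malloc_create 32 512 -b BaseBdev1

# Let examine callbacks settle, then poll for the new bdev with a 2000 ms timeout.
"$rpc" -s "$sock" bdev_wait_for_examine
"$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev1 -t 2000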
raid_bdev_name=Existed_Raid 00:12:52.168 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:52.168 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:52.168 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:52.168 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:52.168 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:52.168 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:52.168 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:52.168 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:52.168 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:52.168 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:52.425 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:52.425 "name": "Existed_Raid", 00:12:52.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.425 "strip_size_kb": 64, 00:12:52.425 "state": "configuring", 00:12:52.425 "raid_level": "raid0", 00:12:52.425 "superblock": false, 00:12:52.425 "num_base_bdevs": 4, 00:12:52.425 "num_base_bdevs_discovered": 1, 00:12:52.425 "num_base_bdevs_operational": 4, 00:12:52.425 "base_bdevs_list": [ 00:12:52.425 { 00:12:52.425 "name": "BaseBdev1", 00:12:52.425 "uuid": "b771ee85-4225-11ef-aa83-81fbc7dfef58", 00:12:52.425 "is_configured": true, 00:12:52.425 "data_offset": 0, 00:12:52.425 "data_size": 65536 00:12:52.425 }, 00:12:52.425 { 00:12:52.425 "name": "BaseBdev2", 00:12:52.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.425 "is_configured": false, 00:12:52.425 "data_offset": 0, 00:12:52.425 "data_size": 0 00:12:52.425 }, 00:12:52.425 { 00:12:52.425 "name": "BaseBdev3", 00:12:52.425 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.425 "is_configured": false, 00:12:52.425 "data_offset": 0, 00:12:52.426 "data_size": 0 00:12:52.426 }, 00:12:52.426 { 00:12:52.426 "name": "BaseBdev4", 00:12:52.426 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:52.426 "is_configured": false, 00:12:52.426 "data_offset": 0, 00:12:52.426 "data_size": 0 00:12:52.426 } 00:12:52.426 ] 00:12:52.426 }' 00:12:52.426 21:12:03 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:52.426 21:12:03 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:52.693 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:12:52.962 [2024-07-14 21:12:04.445603] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:12:52.962 [2024-07-14 21:12:04.445656] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x209dcfc34500 name Existed_Raid, state configuring 00:12:52.962 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n Existed_Raid 00:12:53.220 [2024-07-14 21:12:04.653611] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:12:53.220 [2024-07-14 21:12:04.654689] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:12:53.220 [2024-07-14 21:12:04.654744] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:12:53.220 [2024-07-14 21:12:04.654748] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:12:53.220 [2024-07-14 21:12:04.654769] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:12:53.220 [2024-07-14 21:12:04.654772] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:12:53.220 [2024-07-14 21:12:04.654778] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:53.220 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:53.478 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:53.478 "name": "Existed_Raid", 00:12:53.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.478 "strip_size_kb": 64, 00:12:53.478 "state": "configuring", 00:12:53.478 "raid_level": "raid0", 00:12:53.478 "superblock": false, 00:12:53.478 "num_base_bdevs": 4, 00:12:53.478 "num_base_bdevs_discovered": 1, 00:12:53.478 "num_base_bdevs_operational": 4, 00:12:53.478 "base_bdevs_list": [ 00:12:53.478 { 00:12:53.478 "name": "BaseBdev1", 00:12:53.478 "uuid": "b771ee85-4225-11ef-aa83-81fbc7dfef58", 00:12:53.478 "is_configured": true, 00:12:53.478 "data_offset": 0, 00:12:53.478 "data_size": 65536 00:12:53.478 }, 00:12:53.478 { 00:12:53.478 "name": "BaseBdev2", 00:12:53.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.478 "is_configured": false, 00:12:53.478 "data_offset": 0, 00:12:53.478 "data_size": 
0 00:12:53.478 }, 00:12:53.478 { 00:12:53.478 "name": "BaseBdev3", 00:12:53.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.478 "is_configured": false, 00:12:53.478 "data_offset": 0, 00:12:53.478 "data_size": 0 00:12:53.478 }, 00:12:53.478 { 00:12:53.478 "name": "BaseBdev4", 00:12:53.478 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:53.478 "is_configured": false, 00:12:53.478 "data_offset": 0, 00:12:53.478 "data_size": 0 00:12:53.478 } 00:12:53.478 ] 00:12:53.478 }' 00:12:53.478 21:12:04 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:53.478 21:12:04 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:53.735 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:12:53.993 [2024-07-14 21:12:05.437770] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:12:53.993 BaseBdev2 00:12:53.993 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:12:53.993 21:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:12:53.993 21:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:53.993 21:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:53.993 21:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:53.993 21:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:53.993 21:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:54.251 21:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:12:54.510 [ 00:12:54.510 { 00:12:54.510 "name": "BaseBdev2", 00:12:54.510 "aliases": [ 00:12:54.510 "b8db46b7-4225-11ef-aa83-81fbc7dfef58" 00:12:54.510 ], 00:12:54.510 "product_name": "Malloc disk", 00:12:54.510 "block_size": 512, 00:12:54.510 "num_blocks": 65536, 00:12:54.510 "uuid": "b8db46b7-4225-11ef-aa83-81fbc7dfef58", 00:12:54.510 "assigned_rate_limits": { 00:12:54.510 "rw_ios_per_sec": 0, 00:12:54.510 "rw_mbytes_per_sec": 0, 00:12:54.510 "r_mbytes_per_sec": 0, 00:12:54.510 "w_mbytes_per_sec": 0 00:12:54.510 }, 00:12:54.510 "claimed": true, 00:12:54.510 "claim_type": "exclusive_write", 00:12:54.510 "zoned": false, 00:12:54.510 "supported_io_types": { 00:12:54.510 "read": true, 00:12:54.510 "write": true, 00:12:54.510 "unmap": true, 00:12:54.510 "flush": true, 00:12:54.510 "reset": true, 00:12:54.510 "nvme_admin": false, 00:12:54.510 "nvme_io": false, 00:12:54.510 "nvme_io_md": false, 00:12:54.510 "write_zeroes": true, 00:12:54.510 "zcopy": true, 00:12:54.510 "get_zone_info": false, 00:12:54.510 "zone_management": false, 00:12:54.510 "zone_append": false, 00:12:54.510 "compare": false, 00:12:54.510 "compare_and_write": false, 00:12:54.510 "abort": true, 00:12:54.510 "seek_hole": false, 00:12:54.510 "seek_data": false, 00:12:54.510 "copy": true, 00:12:54.510 "nvme_iov_md": false 00:12:54.510 }, 00:12:54.510 "memory_domains": [ 00:12:54.510 { 00:12:54.510 "dma_device_id": "system", 00:12:54.510 "dma_device_type": 1 
00:12:54.510 }, 00:12:54.510 { 00:12:54.510 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:54.510 "dma_device_type": 2 00:12:54.510 } 00:12:54.510 ], 00:12:54.510 "driver_specific": {} 00:12:54.510 } 00:12:54.510 ] 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:54.510 21:12:05 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:54.768 21:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:54.768 "name": "Existed_Raid", 00:12:54.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.768 "strip_size_kb": 64, 00:12:54.768 "state": "configuring", 00:12:54.768 "raid_level": "raid0", 00:12:54.768 "superblock": false, 00:12:54.768 "num_base_bdevs": 4, 00:12:54.768 "num_base_bdevs_discovered": 2, 00:12:54.768 "num_base_bdevs_operational": 4, 00:12:54.768 "base_bdevs_list": [ 00:12:54.768 { 00:12:54.768 "name": "BaseBdev1", 00:12:54.768 "uuid": "b771ee85-4225-11ef-aa83-81fbc7dfef58", 00:12:54.768 "is_configured": true, 00:12:54.768 "data_offset": 0, 00:12:54.768 "data_size": 65536 00:12:54.768 }, 00:12:54.768 { 00:12:54.768 "name": "BaseBdev2", 00:12:54.768 "uuid": "b8db46b7-4225-11ef-aa83-81fbc7dfef58", 00:12:54.768 "is_configured": true, 00:12:54.768 "data_offset": 0, 00:12:54.768 "data_size": 65536 00:12:54.768 }, 00:12:54.768 { 00:12:54.768 "name": "BaseBdev3", 00:12:54.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.768 "is_configured": false, 00:12:54.768 "data_offset": 0, 00:12:54.768 "data_size": 0 00:12:54.768 }, 00:12:54.768 { 00:12:54.768 "name": "BaseBdev4", 00:12:54.768 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:54.768 "is_configured": false, 00:12:54.768 "data_offset": 0, 00:12:54.768 "data_size": 0 00:12:54.768 } 00:12:54.768 ] 00:12:54.768 }' 00:12:54.768 21:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:54.768 21:12:06 
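Once the raid is registered in configuring state, each newly created BaseBdevN is claimed on arrival and num_base_bdevs_discovered climbs by one per iteration (2, 3, then 4 here). A hedged sketch of that per-iteration assertion, reusing the rpc/sock shorthand from the sketch above; the jq path mirrors the dumps in the trace:

for i in 2 3 4; do
  "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev$i"
  n=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
      | jq -r '.[] | select(.name == "Existed_Raid").num_base_bdevs_discovered')
  [[ "$n" -eq "$i" ]] || echo "expected $i base bdevs discovered, got $n" >&2
done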
bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:55.027 21:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:12:55.285 [2024-07-14 21:12:06.721681] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:12:55.285 BaseBdev3 00:12:55.285 21:12:06 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:12:55.285 21:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:12:55.285 21:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:55.285 21:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:55.285 21:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:55.285 21:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:55.285 21:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:55.542 21:12:06 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:12:55.799 [ 00:12:55.799 { 00:12:55.799 "name": "BaseBdev3", 00:12:55.799 "aliases": [ 00:12:55.799 "b99f3368-4225-11ef-aa83-81fbc7dfef58" 00:12:55.799 ], 00:12:55.799 "product_name": "Malloc disk", 00:12:55.800 "block_size": 512, 00:12:55.800 "num_blocks": 65536, 00:12:55.800 "uuid": "b99f3368-4225-11ef-aa83-81fbc7dfef58", 00:12:55.800 "assigned_rate_limits": { 00:12:55.800 "rw_ios_per_sec": 0, 00:12:55.800 "rw_mbytes_per_sec": 0, 00:12:55.800 "r_mbytes_per_sec": 0, 00:12:55.800 "w_mbytes_per_sec": 0 00:12:55.800 }, 00:12:55.800 "claimed": true, 00:12:55.800 "claim_type": "exclusive_write", 00:12:55.800 "zoned": false, 00:12:55.800 "supported_io_types": { 00:12:55.800 "read": true, 00:12:55.800 "write": true, 00:12:55.800 "unmap": true, 00:12:55.800 "flush": true, 00:12:55.800 "reset": true, 00:12:55.800 "nvme_admin": false, 00:12:55.800 "nvme_io": false, 00:12:55.800 "nvme_io_md": false, 00:12:55.800 "write_zeroes": true, 00:12:55.800 "zcopy": true, 00:12:55.800 "get_zone_info": false, 00:12:55.800 "zone_management": false, 00:12:55.800 "zone_append": false, 00:12:55.800 "compare": false, 00:12:55.800 "compare_and_write": false, 00:12:55.800 "abort": true, 00:12:55.800 "seek_hole": false, 00:12:55.800 "seek_data": false, 00:12:55.800 "copy": true, 00:12:55.800 "nvme_iov_md": false 00:12:55.800 }, 00:12:55.800 "memory_domains": [ 00:12:55.800 { 00:12:55.800 "dma_device_id": "system", 00:12:55.800 "dma_device_type": 1 00:12:55.800 }, 00:12:55.800 { 00:12:55.800 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:55.800 "dma_device_type": 2 00:12:55.800 } 00:12:55.800 ], 00:12:55.800 "driver_specific": {} 00:12:55.800 } 00:12:55.800 ] 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:55.800 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:56.057 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:56.057 "name": "Existed_Raid", 00:12:56.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.057 "strip_size_kb": 64, 00:12:56.057 "state": "configuring", 00:12:56.057 "raid_level": "raid0", 00:12:56.057 "superblock": false, 00:12:56.057 "num_base_bdevs": 4, 00:12:56.057 "num_base_bdevs_discovered": 3, 00:12:56.057 "num_base_bdevs_operational": 4, 00:12:56.057 "base_bdevs_list": [ 00:12:56.057 { 00:12:56.057 "name": "BaseBdev1", 00:12:56.057 "uuid": "b771ee85-4225-11ef-aa83-81fbc7dfef58", 00:12:56.057 "is_configured": true, 00:12:56.057 "data_offset": 0, 00:12:56.057 "data_size": 65536 00:12:56.057 }, 00:12:56.057 { 00:12:56.057 "name": "BaseBdev2", 00:12:56.057 "uuid": "b8db46b7-4225-11ef-aa83-81fbc7dfef58", 00:12:56.057 "is_configured": true, 00:12:56.057 "data_offset": 0, 00:12:56.057 "data_size": 65536 00:12:56.057 }, 00:12:56.057 { 00:12:56.057 "name": "BaseBdev3", 00:12:56.057 "uuid": "b99f3368-4225-11ef-aa83-81fbc7dfef58", 00:12:56.057 "is_configured": true, 00:12:56.057 "data_offset": 0, 00:12:56.057 "data_size": 65536 00:12:56.057 }, 00:12:56.057 { 00:12:56.057 "name": "BaseBdev4", 00:12:56.057 "uuid": "00000000-0000-0000-0000-000000000000", 00:12:56.057 "is_configured": false, 00:12:56.057 "data_offset": 0, 00:12:56.057 "data_size": 0 00:12:56.057 } 00:12:56.057 ] 00:12:56.057 }' 00:12:56.057 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:56.057 21:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:56.314 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:12:56.573 [2024-07-14 21:12:07.977698] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:12:56.573 [2024-07-14 21:12:07.977714] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x209dcfc34a00 00:12:56.573 [2024-07-14 21:12:07.977718] bdev_raid.c:1696:raid_bdev_configure_cont: 
*DEBUG*: blockcnt 262144, blocklen 512 00:12:56.573 [2024-07-14 21:12:07.977754] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x209dcfc97e20 00:12:56.573 [2024-07-14 21:12:07.977849] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x209dcfc34a00 00:12:56.573 [2024-07-14 21:12:07.977853] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x209dcfc34a00 00:12:56.573 [2024-07-14 21:12:07.977879] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:12:56.573 BaseBdev4 00:12:56.573 21:12:07 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:12:56.573 21:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:12:56.573 21:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:12:56.573 21:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:12:56.573 21:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:12:56.573 21:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:12:56.573 21:12:07 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:12:56.831 21:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:12:57.089 [ 00:12:57.089 { 00:12:57.089 "name": "BaseBdev4", 00:12:57.089 "aliases": [ 00:12:57.089 "ba5eda99-4225-11ef-aa83-81fbc7dfef58" 00:12:57.089 ], 00:12:57.089 "product_name": "Malloc disk", 00:12:57.089 "block_size": 512, 00:12:57.089 "num_blocks": 65536, 00:12:57.089 "uuid": "ba5eda99-4225-11ef-aa83-81fbc7dfef58", 00:12:57.089 "assigned_rate_limits": { 00:12:57.089 "rw_ios_per_sec": 0, 00:12:57.089 "rw_mbytes_per_sec": 0, 00:12:57.089 "r_mbytes_per_sec": 0, 00:12:57.089 "w_mbytes_per_sec": 0 00:12:57.089 }, 00:12:57.089 "claimed": true, 00:12:57.089 "claim_type": "exclusive_write", 00:12:57.089 "zoned": false, 00:12:57.089 "supported_io_types": { 00:12:57.089 "read": true, 00:12:57.089 "write": true, 00:12:57.089 "unmap": true, 00:12:57.089 "flush": true, 00:12:57.089 "reset": true, 00:12:57.090 "nvme_admin": false, 00:12:57.090 "nvme_io": false, 00:12:57.090 "nvme_io_md": false, 00:12:57.090 "write_zeroes": true, 00:12:57.090 "zcopy": true, 00:12:57.090 "get_zone_info": false, 00:12:57.090 "zone_management": false, 00:12:57.090 "zone_append": false, 00:12:57.090 "compare": false, 00:12:57.090 "compare_and_write": false, 00:12:57.090 "abort": true, 00:12:57.090 "seek_hole": false, 00:12:57.090 "seek_data": false, 00:12:57.090 "copy": true, 00:12:57.090 "nvme_iov_md": false 00:12:57.090 }, 00:12:57.090 "memory_domains": [ 00:12:57.090 { 00:12:57.090 "dma_device_id": "system", 00:12:57.090 "dma_device_type": 1 00:12:57.090 }, 00:12:57.090 { 00:12:57.090 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.090 "dma_device_type": 2 00:12:57.090 } 00:12:57.090 ], 00:12:57.090 "driver_specific": {} 00:12:57.090 } 00:12:57.090 ] 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:12:57.090 21:12:08 
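The fourth claim is the tipping point: raid_bdev_configure_cont registers the io device, the array flips from configuring to online, and it picks up a real UUID in place of the all-zero placeholder. A sketch of the check guarding that transition, same assumptions as before:

info=$("$rpc" -s "$sock" bdev_raid_get_bdevs all \
       | jq '.[] | select(.name == "Existed_Raid")')
[[ $(jq -r '.state' <<< "$info") == online ]]
[[ $(jq -r '.num_base_bdevs_discovered' <<< "$info") -eq 4 ]]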
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:57.090 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:57.348 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:57.348 "name": "Existed_Raid", 00:12:57.348 "uuid": "ba5ede74-4225-11ef-aa83-81fbc7dfef58", 00:12:57.348 "strip_size_kb": 64, 00:12:57.348 "state": "online", 00:12:57.348 "raid_level": "raid0", 00:12:57.348 "superblock": false, 00:12:57.348 "num_base_bdevs": 4, 00:12:57.348 "num_base_bdevs_discovered": 4, 00:12:57.348 "num_base_bdevs_operational": 4, 00:12:57.348 "base_bdevs_list": [ 00:12:57.348 { 00:12:57.348 "name": "BaseBdev1", 00:12:57.348 "uuid": "b771ee85-4225-11ef-aa83-81fbc7dfef58", 00:12:57.348 "is_configured": true, 00:12:57.348 "data_offset": 0, 00:12:57.348 "data_size": 65536 00:12:57.348 }, 00:12:57.348 { 00:12:57.348 "name": "BaseBdev2", 00:12:57.348 "uuid": "b8db46b7-4225-11ef-aa83-81fbc7dfef58", 00:12:57.348 "is_configured": true, 00:12:57.348 "data_offset": 0, 00:12:57.348 "data_size": 65536 00:12:57.348 }, 00:12:57.348 { 00:12:57.348 "name": "BaseBdev3", 00:12:57.348 "uuid": "b99f3368-4225-11ef-aa83-81fbc7dfef58", 00:12:57.348 "is_configured": true, 00:12:57.348 "data_offset": 0, 00:12:57.348 "data_size": 65536 00:12:57.348 }, 00:12:57.348 { 00:12:57.348 "name": "BaseBdev4", 00:12:57.348 "uuid": "ba5eda99-4225-11ef-aa83-81fbc7dfef58", 00:12:57.348 "is_configured": true, 00:12:57.348 "data_offset": 0, 00:12:57.348 "data_size": 65536 00:12:57.348 } 00:12:57.348 ] 00:12:57.348 }' 00:12:57.348 21:12:08 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:57.348 21:12:08 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:57.606 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:12:57.606 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:12:57.606 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:12:57.606 
21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:12:57.606 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:12:57.606 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:12:57.606 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:12:57.606 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:12:57.864 [2024-07-14 21:12:09.209676] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:12:57.864 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:12:57.864 "name": "Existed_Raid", 00:12:57.864 "aliases": [ 00:12:57.864 "ba5ede74-4225-11ef-aa83-81fbc7dfef58" 00:12:57.864 ], 00:12:57.864 "product_name": "Raid Volume", 00:12:57.864 "block_size": 512, 00:12:57.864 "num_blocks": 262144, 00:12:57.864 "uuid": "ba5ede74-4225-11ef-aa83-81fbc7dfef58", 00:12:57.864 "assigned_rate_limits": { 00:12:57.864 "rw_ios_per_sec": 0, 00:12:57.864 "rw_mbytes_per_sec": 0, 00:12:57.864 "r_mbytes_per_sec": 0, 00:12:57.864 "w_mbytes_per_sec": 0 00:12:57.864 }, 00:12:57.864 "claimed": false, 00:12:57.864 "zoned": false, 00:12:57.864 "supported_io_types": { 00:12:57.864 "read": true, 00:12:57.864 "write": true, 00:12:57.864 "unmap": true, 00:12:57.864 "flush": true, 00:12:57.864 "reset": true, 00:12:57.864 "nvme_admin": false, 00:12:57.864 "nvme_io": false, 00:12:57.864 "nvme_io_md": false, 00:12:57.864 "write_zeroes": true, 00:12:57.864 "zcopy": false, 00:12:57.864 "get_zone_info": false, 00:12:57.864 "zone_management": false, 00:12:57.864 "zone_append": false, 00:12:57.864 "compare": false, 00:12:57.864 "compare_and_write": false, 00:12:57.864 "abort": false, 00:12:57.864 "seek_hole": false, 00:12:57.864 "seek_data": false, 00:12:57.864 "copy": false, 00:12:57.864 "nvme_iov_md": false 00:12:57.864 }, 00:12:57.864 "memory_domains": [ 00:12:57.864 { 00:12:57.864 "dma_device_id": "system", 00:12:57.864 "dma_device_type": 1 00:12:57.864 }, 00:12:57.864 { 00:12:57.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.864 "dma_device_type": 2 00:12:57.864 }, 00:12:57.864 { 00:12:57.864 "dma_device_id": "system", 00:12:57.864 "dma_device_type": 1 00:12:57.864 }, 00:12:57.864 { 00:12:57.864 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.864 "dma_device_type": 2 00:12:57.864 }, 00:12:57.864 { 00:12:57.865 "dma_device_id": "system", 00:12:57.865 "dma_device_type": 1 00:12:57.865 }, 00:12:57.865 { 00:12:57.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.865 "dma_device_type": 2 00:12:57.865 }, 00:12:57.865 { 00:12:57.865 "dma_device_id": "system", 00:12:57.865 "dma_device_type": 1 00:12:57.865 }, 00:12:57.865 { 00:12:57.865 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:57.865 "dma_device_type": 2 00:12:57.865 } 00:12:57.865 ], 00:12:57.865 "driver_specific": { 00:12:57.865 "raid": { 00:12:57.865 "uuid": "ba5ede74-4225-11ef-aa83-81fbc7dfef58", 00:12:57.865 "strip_size_kb": 64, 00:12:57.865 "state": "online", 00:12:57.865 "raid_level": "raid0", 00:12:57.865 "superblock": false, 00:12:57.865 "num_base_bdevs": 4, 00:12:57.865 "num_base_bdevs_discovered": 4, 00:12:57.865 "num_base_bdevs_operational": 4, 00:12:57.865 "base_bdevs_list": [ 00:12:57.865 { 00:12:57.865 "name": "BaseBdev1", 00:12:57.865 "uuid": "b771ee85-4225-11ef-aa83-81fbc7dfef58", 00:12:57.865 
"is_configured": true, 00:12:57.865 "data_offset": 0, 00:12:57.865 "data_size": 65536 00:12:57.865 }, 00:12:57.865 { 00:12:57.865 "name": "BaseBdev2", 00:12:57.865 "uuid": "b8db46b7-4225-11ef-aa83-81fbc7dfef58", 00:12:57.865 "is_configured": true, 00:12:57.865 "data_offset": 0, 00:12:57.865 "data_size": 65536 00:12:57.865 }, 00:12:57.865 { 00:12:57.865 "name": "BaseBdev3", 00:12:57.865 "uuid": "b99f3368-4225-11ef-aa83-81fbc7dfef58", 00:12:57.865 "is_configured": true, 00:12:57.865 "data_offset": 0, 00:12:57.865 "data_size": 65536 00:12:57.865 }, 00:12:57.865 { 00:12:57.865 "name": "BaseBdev4", 00:12:57.865 "uuid": "ba5eda99-4225-11ef-aa83-81fbc7dfef58", 00:12:57.865 "is_configured": true, 00:12:57.865 "data_offset": 0, 00:12:57.865 "data_size": 65536 00:12:57.865 } 00:12:57.865 ] 00:12:57.865 } 00:12:57.865 } 00:12:57.865 }' 00:12:57.865 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:12:57.865 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:12:57.865 BaseBdev2 00:12:57.865 BaseBdev3 00:12:57.865 BaseBdev4' 00:12:57.865 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:57.865 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:12:57.865 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.123 "name": "BaseBdev1", 00:12:58.123 "aliases": [ 00:12:58.123 "b771ee85-4225-11ef-aa83-81fbc7dfef58" 00:12:58.123 ], 00:12:58.123 "product_name": "Malloc disk", 00:12:58.123 "block_size": 512, 00:12:58.123 "num_blocks": 65536, 00:12:58.123 "uuid": "b771ee85-4225-11ef-aa83-81fbc7dfef58", 00:12:58.123 "assigned_rate_limits": { 00:12:58.123 "rw_ios_per_sec": 0, 00:12:58.123 "rw_mbytes_per_sec": 0, 00:12:58.123 "r_mbytes_per_sec": 0, 00:12:58.123 "w_mbytes_per_sec": 0 00:12:58.123 }, 00:12:58.123 "claimed": true, 00:12:58.123 "claim_type": "exclusive_write", 00:12:58.123 "zoned": false, 00:12:58.123 "supported_io_types": { 00:12:58.123 "read": true, 00:12:58.123 "write": true, 00:12:58.123 "unmap": true, 00:12:58.123 "flush": true, 00:12:58.123 "reset": true, 00:12:58.123 "nvme_admin": false, 00:12:58.123 "nvme_io": false, 00:12:58.123 "nvme_io_md": false, 00:12:58.123 "write_zeroes": true, 00:12:58.123 "zcopy": true, 00:12:58.123 "get_zone_info": false, 00:12:58.123 "zone_management": false, 00:12:58.123 "zone_append": false, 00:12:58.123 "compare": false, 00:12:58.123 "compare_and_write": false, 00:12:58.123 "abort": true, 00:12:58.123 "seek_hole": false, 00:12:58.123 "seek_data": false, 00:12:58.123 "copy": true, 00:12:58.123 "nvme_iov_md": false 00:12:58.123 }, 00:12:58.123 "memory_domains": [ 00:12:58.123 { 00:12:58.123 "dma_device_id": "system", 00:12:58.123 "dma_device_type": 1 00:12:58.123 }, 00:12:58.123 { 00:12:58.123 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.123 "dma_device_type": 2 00:12:58.123 } 00:12:58.123 ], 00:12:58.123 "driver_specific": {} 00:12:58.123 }' 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.123 21:12:09 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:12:58.123 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.382 "name": "BaseBdev2", 00:12:58.382 "aliases": [ 00:12:58.382 "b8db46b7-4225-11ef-aa83-81fbc7dfef58" 00:12:58.382 ], 00:12:58.382 "product_name": "Malloc disk", 00:12:58.382 "block_size": 512, 00:12:58.382 "num_blocks": 65536, 00:12:58.382 "uuid": "b8db46b7-4225-11ef-aa83-81fbc7dfef58", 00:12:58.382 "assigned_rate_limits": { 00:12:58.382 "rw_ios_per_sec": 0, 00:12:58.382 "rw_mbytes_per_sec": 0, 00:12:58.382 "r_mbytes_per_sec": 0, 00:12:58.382 "w_mbytes_per_sec": 0 00:12:58.382 }, 00:12:58.382 "claimed": true, 00:12:58.382 "claim_type": "exclusive_write", 00:12:58.382 "zoned": false, 00:12:58.382 "supported_io_types": { 00:12:58.382 "read": true, 00:12:58.382 "write": true, 00:12:58.382 "unmap": true, 00:12:58.382 "flush": true, 00:12:58.382 "reset": true, 00:12:58.382 "nvme_admin": false, 00:12:58.382 "nvme_io": false, 00:12:58.382 "nvme_io_md": false, 00:12:58.382 "write_zeroes": true, 00:12:58.382 "zcopy": true, 00:12:58.382 "get_zone_info": false, 00:12:58.382 "zone_management": false, 00:12:58.382 "zone_append": false, 00:12:58.382 "compare": false, 00:12:58.382 "compare_and_write": false, 00:12:58.382 "abort": true, 00:12:58.382 "seek_hole": false, 00:12:58.382 "seek_data": false, 00:12:58.382 "copy": true, 00:12:58.382 "nvme_iov_md": false 00:12:58.382 }, 00:12:58.382 "memory_domains": [ 00:12:58.382 { 00:12:58.382 "dma_device_id": "system", 00:12:58.382 "dma_device_type": 1 00:12:58.382 }, 00:12:58.382 { 00:12:58.382 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.382 "dma_device_type": 2 00:12:58.382 } 00:12:58.382 ], 00:12:58.382 "driver_specific": {} 00:12:58.382 }' 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.382 
21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:12:58.382 21:12:09 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.640 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.640 "name": "BaseBdev3", 00:12:58.640 "aliases": [ 00:12:58.640 "b99f3368-4225-11ef-aa83-81fbc7dfef58" 00:12:58.640 ], 00:12:58.640 "product_name": "Malloc disk", 00:12:58.640 "block_size": 512, 00:12:58.640 "num_blocks": 65536, 00:12:58.640 "uuid": "b99f3368-4225-11ef-aa83-81fbc7dfef58", 00:12:58.640 "assigned_rate_limits": { 00:12:58.640 "rw_ios_per_sec": 0, 00:12:58.640 "rw_mbytes_per_sec": 0, 00:12:58.640 "r_mbytes_per_sec": 0, 00:12:58.640 "w_mbytes_per_sec": 0 00:12:58.640 }, 00:12:58.640 "claimed": true, 00:12:58.640 "claim_type": "exclusive_write", 00:12:58.640 "zoned": false, 00:12:58.640 "supported_io_types": { 00:12:58.640 "read": true, 00:12:58.640 "write": true, 00:12:58.640 "unmap": true, 00:12:58.640 "flush": true, 00:12:58.640 "reset": true, 00:12:58.640 "nvme_admin": false, 00:12:58.640 "nvme_io": false, 00:12:58.640 "nvme_io_md": false, 00:12:58.640 "write_zeroes": true, 00:12:58.640 "zcopy": true, 00:12:58.640 "get_zone_info": false, 00:12:58.640 "zone_management": false, 00:12:58.640 "zone_append": false, 00:12:58.640 "compare": false, 00:12:58.640 "compare_and_write": false, 00:12:58.640 "abort": true, 00:12:58.640 "seek_hole": false, 00:12:58.640 "seek_data": false, 00:12:58.640 "copy": true, 00:12:58.640 "nvme_iov_md": false 00:12:58.640 }, 00:12:58.640 "memory_domains": [ 00:12:58.640 { 00:12:58.640 "dma_device_id": "system", 00:12:58.640 "dma_device_type": 1 00:12:58.640 }, 00:12:58.640 { 00:12:58.640 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.640 "dma_device_type": 2 00:12:58.640 } 00:12:58.640 ], 00:12:58.640 "driver_specific": {} 00:12:58.640 }' 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 
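verify_raid_bdev_properties appears to compare four fields of each base bdev against the Raid Volume itself: block_size must match (the [[ 512 == 512 ]] checks) and md_size, md_interleave and dif_type must all be unset (the [[ null == null ]] checks). Note the volume reports "num_blocks": 262144, i.e. 4 x 65536, since a raid0 array's capacity is the sum of its members. A compact, illustrative sketch of one such comparison; the field list is taken from the trace, the loop shape is an assumption:

raid_json=$("$rpc" -s "$sock" bdev_get_bdevs -b Existed_Raid | jq '.[]')
base_json=$("$rpc" -s "$sock" bdev_get_bdevs -b BaseBdev1 -t 2000 | jq '.[]')
for field in .block_size .md_size .md_interleave .dif_type; do
  [[ "$(jq "$field" <<< "$raid_json")" == "$(jq "$field" <<< "$base_json")" ]] \
    || echo "base bdev disagrees with raid volume on $field" >&2
done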
00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:12:58.641 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:12:58.899 "name": "BaseBdev4", 00:12:58.899 "aliases": [ 00:12:58.899 "ba5eda99-4225-11ef-aa83-81fbc7dfef58" 00:12:58.899 ], 00:12:58.899 "product_name": "Malloc disk", 00:12:58.899 "block_size": 512, 00:12:58.899 "num_blocks": 65536, 00:12:58.899 "uuid": "ba5eda99-4225-11ef-aa83-81fbc7dfef58", 00:12:58.899 "assigned_rate_limits": { 00:12:58.899 "rw_ios_per_sec": 0, 00:12:58.899 "rw_mbytes_per_sec": 0, 00:12:58.899 "r_mbytes_per_sec": 0, 00:12:58.899 "w_mbytes_per_sec": 0 00:12:58.899 }, 00:12:58.899 "claimed": true, 00:12:58.899 "claim_type": "exclusive_write", 00:12:58.899 "zoned": false, 00:12:58.899 "supported_io_types": { 00:12:58.899 "read": true, 00:12:58.899 "write": true, 00:12:58.899 "unmap": true, 00:12:58.899 "flush": true, 00:12:58.899 "reset": true, 00:12:58.899 "nvme_admin": false, 00:12:58.899 "nvme_io": false, 00:12:58.899 "nvme_io_md": false, 00:12:58.899 "write_zeroes": true, 00:12:58.899 "zcopy": true, 00:12:58.899 "get_zone_info": false, 00:12:58.899 "zone_management": false, 00:12:58.899 "zone_append": false, 00:12:58.899 "compare": false, 00:12:58.899 "compare_and_write": false, 00:12:58.899 "abort": true, 00:12:58.899 "seek_hole": false, 00:12:58.899 "seek_data": false, 00:12:58.899 "copy": true, 00:12:58.899 "nvme_iov_md": false 00:12:58.899 }, 00:12:58.899 "memory_domains": [ 00:12:58.899 { 00:12:58.899 "dma_device_id": "system", 00:12:58.899 "dma_device_type": 1 00:12:58.899 }, 00:12:58.899 { 00:12:58.899 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:12:58.899 "dma_device_type": 2 00:12:58.899 } 00:12:58.899 ], 00:12:58.899 "driver_specific": {} 00:12:58.899 }' 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:12:58.899 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:12:59.158 [2024-07-14 21:12:10.617675] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:12:59.158 [2024-07-14 21:12:10.617696] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:12:59.158 [2024-07-14 21:12:10.617731] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.158 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:12:59.416 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:12:59.416 "name": "Existed_Raid", 00:12:59.416 "uuid": "ba5ede74-4225-11ef-aa83-81fbc7dfef58", 00:12:59.416 "strip_size_kb": 64, 00:12:59.416 "state": "offline", 00:12:59.416 "raid_level": "raid0", 00:12:59.416 "superblock": false, 00:12:59.416 "num_base_bdevs": 4, 00:12:59.416 "num_base_bdevs_discovered": 3, 00:12:59.416 "num_base_bdevs_operational": 3, 00:12:59.416 "base_bdevs_list": [ 00:12:59.416 { 00:12:59.416 "name": null, 00:12:59.416 "uuid": "00000000-0000-0000-0000-000000000000", 
00:12:59.416 "is_configured": false, 00:12:59.416 "data_offset": 0, 00:12:59.416 "data_size": 65536 00:12:59.416 }, 00:12:59.416 { 00:12:59.416 "name": "BaseBdev2", 00:12:59.416 "uuid": "b8db46b7-4225-11ef-aa83-81fbc7dfef58", 00:12:59.416 "is_configured": true, 00:12:59.416 "data_offset": 0, 00:12:59.416 "data_size": 65536 00:12:59.416 }, 00:12:59.416 { 00:12:59.416 "name": "BaseBdev3", 00:12:59.416 "uuid": "b99f3368-4225-11ef-aa83-81fbc7dfef58", 00:12:59.416 "is_configured": true, 00:12:59.416 "data_offset": 0, 00:12:59.416 "data_size": 65536 00:12:59.416 }, 00:12:59.416 { 00:12:59.416 "name": "BaseBdev4", 00:12:59.416 "uuid": "ba5eda99-4225-11ef-aa83-81fbc7dfef58", 00:12:59.416 "is_configured": true, 00:12:59.416 "data_offset": 0, 00:12:59.416 "data_size": 65536 00:12:59.416 } 00:12:59.416 ] 00:12:59.416 }' 00:12:59.416 21:12:10 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:12:59.416 21:12:10 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:12:59.674 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:12:59.674 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:12:59.674 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:12:59.674 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:12:59.932 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:12:59.932 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:12:59.932 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:00.191 [2024-07-14 21:12:11.566107] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:00.191 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:00.191 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:00.191 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.191 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:00.449 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:00.449 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:00.449 21:12:11 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:00.708 [2024-07-14 21:12:12.042686] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:00.708 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:00.708 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:00.708 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:00.708 21:12:12 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:00.966 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:00.966 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:00.966 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:01.224 [2024-07-14 21:12:12.571349] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:01.224 [2024-07-14 21:12:12.571370] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x209dcfc34a00 name Existed_Raid, state offline 00:13:01.224 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:01.224 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:01.224 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:01.224 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:01.482 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:01.482 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:01.482 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:01.482 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:01.482 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:01.483 21:12:12 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:01.740 BaseBdev2 00:13:01.740 21:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:01.740 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:01.740 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:01.740 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:01.740 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:01.740 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:01.740 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:01.998 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:01.998 [ 00:13:01.998 { 00:13:01.998 "name": "BaseBdev2", 00:13:01.998 "aliases": [ 00:13:01.998 "bd617ba2-4225-11ef-aa83-81fbc7dfef58" 00:13:01.998 ], 00:13:01.998 "product_name": "Malloc disk", 00:13:01.998 "block_size": 512, 00:13:01.998 "num_blocks": 65536, 00:13:01.998 "uuid": "bd617ba2-4225-11ef-aa83-81fbc7dfef58", 00:13:01.998 "assigned_rate_limits": { 00:13:01.998 "rw_ios_per_sec": 0, 00:13:01.998 "rw_mbytes_per_sec": 0, 00:13:01.998 "r_mbytes_per_sec": 0, 00:13:01.998 "w_mbytes_per_sec": 0 
00:13:01.998 }, 00:13:01.998 "claimed": false, 00:13:01.998 "zoned": false, 00:13:01.998 "supported_io_types": { 00:13:01.998 "read": true, 00:13:01.998 "write": true, 00:13:01.998 "unmap": true, 00:13:01.998 "flush": true, 00:13:01.998 "reset": true, 00:13:01.998 "nvme_admin": false, 00:13:01.998 "nvme_io": false, 00:13:01.998 "nvme_io_md": false, 00:13:01.998 "write_zeroes": true, 00:13:01.998 "zcopy": true, 00:13:01.998 "get_zone_info": false, 00:13:01.998 "zone_management": false, 00:13:01.998 "zone_append": false, 00:13:01.998 "compare": false, 00:13:01.998 "compare_and_write": false, 00:13:01.998 "abort": true, 00:13:01.998 "seek_hole": false, 00:13:01.998 "seek_data": false, 00:13:01.998 "copy": true, 00:13:01.998 "nvme_iov_md": false 00:13:01.998 }, 00:13:01.998 "memory_domains": [ 00:13:01.998 { 00:13:01.998 "dma_device_id": "system", 00:13:01.998 "dma_device_type": 1 00:13:01.998 }, 00:13:01.998 { 00:13:01.998 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:01.998 "dma_device_type": 2 00:13:01.998 } 00:13:01.998 ], 00:13:01.998 "driver_specific": {} 00:13:01.998 } 00:13:01.998 ] 00:13:01.998 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:01.998 21:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:01.998 21:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:01.998 21:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:02.255 BaseBdev3 00:13:02.513 21:12:13 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:02.513 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:02.513 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:02.513 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:02.513 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:02.513 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:02.513 21:12:13 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:02.513 21:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:02.788 [ 00:13:02.788 { 00:13:02.788 "name": "BaseBdev3", 00:13:02.788 "aliases": [ 00:13:02.788 "bdd43b28-4225-11ef-aa83-81fbc7dfef58" 00:13:02.788 ], 00:13:02.788 "product_name": "Malloc disk", 00:13:02.788 "block_size": 512, 00:13:02.788 "num_blocks": 65536, 00:13:02.788 "uuid": "bdd43b28-4225-11ef-aa83-81fbc7dfef58", 00:13:02.788 "assigned_rate_limits": { 00:13:02.788 "rw_ios_per_sec": 0, 00:13:02.788 "rw_mbytes_per_sec": 0, 00:13:02.788 "r_mbytes_per_sec": 0, 00:13:02.788 "w_mbytes_per_sec": 0 00:13:02.788 }, 00:13:02.788 "claimed": false, 00:13:02.788 "zoned": false, 00:13:02.788 "supported_io_types": { 00:13:02.788 "read": true, 00:13:02.788 "write": true, 00:13:02.788 "unmap": true, 00:13:02.788 "flush": true, 00:13:02.788 "reset": true, 00:13:02.788 "nvme_admin": false, 00:13:02.788 "nvme_io": false, 00:13:02.788 "nvme_io_md": 
false, 00:13:02.788 "write_zeroes": true, 00:13:02.788 "zcopy": true, 00:13:02.788 "get_zone_info": false, 00:13:02.788 "zone_management": false, 00:13:02.788 "zone_append": false, 00:13:02.788 "compare": false, 00:13:02.788 "compare_and_write": false, 00:13:02.788 "abort": true, 00:13:02.788 "seek_hole": false, 00:13:02.788 "seek_data": false, 00:13:02.788 "copy": true, 00:13:02.788 "nvme_iov_md": false 00:13:02.788 }, 00:13:02.788 "memory_domains": [ 00:13:02.788 { 00:13:02.788 "dma_device_id": "system", 00:13:02.788 "dma_device_type": 1 00:13:02.788 }, 00:13:02.788 { 00:13:02.788 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:02.788 "dma_device_type": 2 00:13:02.788 } 00:13:02.788 ], 00:13:02.788 "driver_specific": {} 00:13:02.788 } 00:13:02.788 ] 00:13:02.788 21:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:02.788 21:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:02.788 21:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:02.788 21:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:03.056 BaseBdev4 00:13:03.056 21:12:14 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:13:03.056 21:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:03.056 21:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:03.056 21:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:03.056 21:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:03.056 21:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:03.056 21:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:03.314 21:12:14 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:03.571 [ 00:13:03.571 { 00:13:03.571 "name": "BaseBdev4", 00:13:03.571 "aliases": [ 00:13:03.571 "be404430-4225-11ef-aa83-81fbc7dfef58" 00:13:03.571 ], 00:13:03.571 "product_name": "Malloc disk", 00:13:03.571 "block_size": 512, 00:13:03.571 "num_blocks": 65536, 00:13:03.571 "uuid": "be404430-4225-11ef-aa83-81fbc7dfef58", 00:13:03.571 "assigned_rate_limits": { 00:13:03.571 "rw_ios_per_sec": 0, 00:13:03.571 "rw_mbytes_per_sec": 0, 00:13:03.571 "r_mbytes_per_sec": 0, 00:13:03.571 "w_mbytes_per_sec": 0 00:13:03.571 }, 00:13:03.571 "claimed": false, 00:13:03.571 "zoned": false, 00:13:03.571 "supported_io_types": { 00:13:03.571 "read": true, 00:13:03.571 "write": true, 00:13:03.571 "unmap": true, 00:13:03.571 "flush": true, 00:13:03.571 "reset": true, 00:13:03.571 "nvme_admin": false, 00:13:03.571 "nvme_io": false, 00:13:03.571 "nvme_io_md": false, 00:13:03.571 "write_zeroes": true, 00:13:03.571 "zcopy": true, 00:13:03.571 "get_zone_info": false, 00:13:03.571 "zone_management": false, 00:13:03.571 "zone_append": false, 00:13:03.571 "compare": false, 00:13:03.571 "compare_and_write": false, 00:13:03.571 "abort": true, 00:13:03.571 "seek_hole": false, 00:13:03.571 "seek_data": false, 
00:13:03.571 "copy": true, 00:13:03.571 "nvme_iov_md": false 00:13:03.571 }, 00:13:03.571 "memory_domains": [ 00:13:03.571 { 00:13:03.571 "dma_device_id": "system", 00:13:03.571 "dma_device_type": 1 00:13:03.571 }, 00:13:03.571 { 00:13:03.571 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:03.571 "dma_device_type": 2 00:13:03.571 } 00:13:03.571 ], 00:13:03.571 "driver_specific": {} 00:13:03.571 } 00:13:03.571 ] 00:13:03.571 21:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:03.571 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:03.571 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:03.571 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:03.830 [2024-07-14 21:12:15.240088] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:03.830 [2024-07-14 21:12:15.240148] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:03.830 [2024-07-14 21:12:15.240154] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:03.830 [2024-07-14 21:12:15.240525] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:03.830 [2024-07-14 21:12:15.240543] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:03.830 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:03.830 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:03.830 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:03.830 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:03.830 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:03.830 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:03.830 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:03.830 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:03.830 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:03.830 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:03.830 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:03.830 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.089 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.089 "name": "Existed_Raid", 00:13:04.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.089 "strip_size_kb": 64, 00:13:04.089 "state": "configuring", 00:13:04.089 "raid_level": "raid0", 00:13:04.089 "superblock": false, 00:13:04.089 "num_base_bdevs": 4, 00:13:04.089 "num_base_bdevs_discovered": 3, 00:13:04.089 "num_base_bdevs_operational": 
4, 00:13:04.089 "base_bdevs_list": [ 00:13:04.089 { 00:13:04.089 "name": "BaseBdev1", 00:13:04.089 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.089 "is_configured": false, 00:13:04.089 "data_offset": 0, 00:13:04.089 "data_size": 0 00:13:04.089 }, 00:13:04.089 { 00:13:04.089 "name": "BaseBdev2", 00:13:04.089 "uuid": "bd617ba2-4225-11ef-aa83-81fbc7dfef58", 00:13:04.089 "is_configured": true, 00:13:04.089 "data_offset": 0, 00:13:04.089 "data_size": 65536 00:13:04.089 }, 00:13:04.089 { 00:13:04.089 "name": "BaseBdev3", 00:13:04.089 "uuid": "bdd43b28-4225-11ef-aa83-81fbc7dfef58", 00:13:04.089 "is_configured": true, 00:13:04.089 "data_offset": 0, 00:13:04.089 "data_size": 65536 00:13:04.089 }, 00:13:04.089 { 00:13:04.089 "name": "BaseBdev4", 00:13:04.089 "uuid": "be404430-4225-11ef-aa83-81fbc7dfef58", 00:13:04.089 "is_configured": true, 00:13:04.089 "data_offset": 0, 00:13:04.089 "data_size": 65536 00:13:04.089 } 00:13:04.089 ] 00:13:04.089 }' 00:13:04.089 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:04.089 21:12:15 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:04.348 21:12:15 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:04.605 [2024-07-14 21:12:16.040089] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:04.606 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:04.606 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:04.606 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:04.606 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:04.606 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:04.606 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:04.606 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:04.606 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:04.606 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:04.606 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:04.606 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:04.606 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:04.862 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:04.862 "name": "Existed_Raid", 00:13:04.862 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:04.862 "strip_size_kb": 64, 00:13:04.862 "state": "configuring", 00:13:04.862 "raid_level": "raid0", 00:13:04.862 "superblock": false, 00:13:04.862 "num_base_bdevs": 4, 00:13:04.862 "num_base_bdevs_discovered": 2, 00:13:04.862 "num_base_bdevs_operational": 4, 00:13:04.862 "base_bdevs_list": [ 00:13:04.862 { 00:13:04.862 "name": "BaseBdev1", 00:13:04.862 "uuid": "00000000-0000-0000-0000-000000000000", 
00:13:04.862 "is_configured": false, 00:13:04.862 "data_offset": 0, 00:13:04.862 "data_size": 0 00:13:04.862 }, 00:13:04.862 { 00:13:04.862 "name": null, 00:13:04.862 "uuid": "bd617ba2-4225-11ef-aa83-81fbc7dfef58", 00:13:04.862 "is_configured": false, 00:13:04.862 "data_offset": 0, 00:13:04.862 "data_size": 65536 00:13:04.862 }, 00:13:04.862 { 00:13:04.862 "name": "BaseBdev3", 00:13:04.862 "uuid": "bdd43b28-4225-11ef-aa83-81fbc7dfef58", 00:13:04.862 "is_configured": true, 00:13:04.862 "data_offset": 0, 00:13:04.862 "data_size": 65536 00:13:04.862 }, 00:13:04.862 { 00:13:04.862 "name": "BaseBdev4", 00:13:04.862 "uuid": "be404430-4225-11ef-aa83-81fbc7dfef58", 00:13:04.862 "is_configured": true, 00:13:04.862 "data_offset": 0, 00:13:04.862 "data_size": 65536 00:13:04.862 } 00:13:04.862 ] 00:13:04.862 }' 00:13:04.862 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:04.862 21:12:16 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:05.120 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:05.120 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:05.377 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:05.377 21:12:16 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:05.636 [2024-07-14 21:12:17.032165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:05.636 BaseBdev1 00:13:05.636 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:05.636 21:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:05.636 21:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:05.636 21:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:05.636 21:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:05.636 21:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:05.636 21:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:05.894 21:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:06.153 [ 00:13:06.153 { 00:13:06.153 "name": "BaseBdev1", 00:13:06.153 "aliases": [ 00:13:06.153 "bfc47487-4225-11ef-aa83-81fbc7dfef58" 00:13:06.153 ], 00:13:06.153 "product_name": "Malloc disk", 00:13:06.153 "block_size": 512, 00:13:06.153 "num_blocks": 65536, 00:13:06.153 "uuid": "bfc47487-4225-11ef-aa83-81fbc7dfef58", 00:13:06.153 "assigned_rate_limits": { 00:13:06.153 "rw_ios_per_sec": 0, 00:13:06.153 "rw_mbytes_per_sec": 0, 00:13:06.153 "r_mbytes_per_sec": 0, 00:13:06.153 "w_mbytes_per_sec": 0 00:13:06.153 }, 00:13:06.153 "claimed": true, 00:13:06.153 "claim_type": "exclusive_write", 00:13:06.153 "zoned": false, 00:13:06.153 "supported_io_types": { 00:13:06.153 "read": true, 00:13:06.153 
"write": true, 00:13:06.153 "unmap": true, 00:13:06.153 "flush": true, 00:13:06.153 "reset": true, 00:13:06.153 "nvme_admin": false, 00:13:06.153 "nvme_io": false, 00:13:06.153 "nvme_io_md": false, 00:13:06.153 "write_zeroes": true, 00:13:06.153 "zcopy": true, 00:13:06.153 "get_zone_info": false, 00:13:06.153 "zone_management": false, 00:13:06.153 "zone_append": false, 00:13:06.153 "compare": false, 00:13:06.153 "compare_and_write": false, 00:13:06.153 "abort": true, 00:13:06.153 "seek_hole": false, 00:13:06.153 "seek_data": false, 00:13:06.153 "copy": true, 00:13:06.153 "nvme_iov_md": false 00:13:06.153 }, 00:13:06.153 "memory_domains": [ 00:13:06.153 { 00:13:06.153 "dma_device_id": "system", 00:13:06.153 "dma_device_type": 1 00:13:06.153 }, 00:13:06.153 { 00:13:06.153 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:06.153 "dma_device_type": 2 00:13:06.153 } 00:13:06.153 ], 00:13:06.153 "driver_specific": {} 00:13:06.153 } 00:13:06.153 ] 00:13:06.153 21:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:06.153 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:06.153 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:06.153 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:06.153 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:06.153 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:06.153 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:06.153 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:06.153 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:06.153 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:06.153 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:06.153 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.153 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:06.411 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:06.411 "name": "Existed_Raid", 00:13:06.411 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:06.411 "strip_size_kb": 64, 00:13:06.411 "state": "configuring", 00:13:06.411 "raid_level": "raid0", 00:13:06.411 "superblock": false, 00:13:06.411 "num_base_bdevs": 4, 00:13:06.411 "num_base_bdevs_discovered": 3, 00:13:06.411 "num_base_bdevs_operational": 4, 00:13:06.411 "base_bdevs_list": [ 00:13:06.411 { 00:13:06.411 "name": "BaseBdev1", 00:13:06.411 "uuid": "bfc47487-4225-11ef-aa83-81fbc7dfef58", 00:13:06.411 "is_configured": true, 00:13:06.411 "data_offset": 0, 00:13:06.411 "data_size": 65536 00:13:06.411 }, 00:13:06.411 { 00:13:06.411 "name": null, 00:13:06.411 "uuid": "bd617ba2-4225-11ef-aa83-81fbc7dfef58", 00:13:06.411 "is_configured": false, 00:13:06.411 "data_offset": 0, 00:13:06.411 "data_size": 65536 00:13:06.411 }, 00:13:06.411 { 00:13:06.411 "name": "BaseBdev3", 00:13:06.411 "uuid": 
"bdd43b28-4225-11ef-aa83-81fbc7dfef58", 00:13:06.411 "is_configured": true, 00:13:06.411 "data_offset": 0, 00:13:06.411 "data_size": 65536 00:13:06.411 }, 00:13:06.411 { 00:13:06.411 "name": "BaseBdev4", 00:13:06.411 "uuid": "be404430-4225-11ef-aa83-81fbc7dfef58", 00:13:06.411 "is_configured": true, 00:13:06.411 "data_offset": 0, 00:13:06.411 "data_size": 65536 00:13:06.411 } 00:13:06.411 ] 00:13:06.411 }' 00:13:06.411 21:12:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:06.411 21:12:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:06.669 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.669 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:06.927 [2024-07-14 21:12:18.416130] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:06.927 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:07.185 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:07.185 "name": "Existed_Raid", 00:13:07.185 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:07.185 "strip_size_kb": 64, 00:13:07.185 "state": "configuring", 00:13:07.185 "raid_level": "raid0", 00:13:07.185 "superblock": false, 00:13:07.185 "num_base_bdevs": 4, 00:13:07.185 "num_base_bdevs_discovered": 2, 00:13:07.185 "num_base_bdevs_operational": 4, 00:13:07.185 "base_bdevs_list": [ 00:13:07.185 { 00:13:07.185 "name": "BaseBdev1", 00:13:07.185 "uuid": "bfc47487-4225-11ef-aa83-81fbc7dfef58", 00:13:07.185 "is_configured": true, 00:13:07.185 "data_offset": 0, 00:13:07.185 "data_size": 65536 00:13:07.185 }, 00:13:07.185 { 
00:13:07.185 "name": null, 00:13:07.185 "uuid": "bd617ba2-4225-11ef-aa83-81fbc7dfef58", 00:13:07.185 "is_configured": false, 00:13:07.185 "data_offset": 0, 00:13:07.185 "data_size": 65536 00:13:07.185 }, 00:13:07.185 { 00:13:07.185 "name": null, 00:13:07.185 "uuid": "bdd43b28-4225-11ef-aa83-81fbc7dfef58", 00:13:07.185 "is_configured": false, 00:13:07.185 "data_offset": 0, 00:13:07.185 "data_size": 65536 00:13:07.185 }, 00:13:07.185 { 00:13:07.185 "name": "BaseBdev4", 00:13:07.185 "uuid": "be404430-4225-11ef-aa83-81fbc7dfef58", 00:13:07.185 "is_configured": true, 00:13:07.185 "data_offset": 0, 00:13:07.185 "data_size": 65536 00:13:07.185 } 00:13:07.185 ] 00:13:07.185 }' 00:13:07.185 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:07.185 21:12:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:07.442 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.443 21:12:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:07.700 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:07.700 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:07.958 [2024-07-14 21:12:19.408143] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:07.958 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:07.958 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:07.958 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:07.958 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:07.958 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:07.958 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:07.958 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:07.958 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:07.958 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:07.958 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:07.958 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:07.958 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.216 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:08.216 "name": "Existed_Raid", 00:13:08.216 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:08.216 "strip_size_kb": 64, 00:13:08.216 "state": "configuring", 00:13:08.216 "raid_level": "raid0", 00:13:08.216 "superblock": false, 00:13:08.216 "num_base_bdevs": 4, 00:13:08.216 "num_base_bdevs_discovered": 3, 00:13:08.216 
"num_base_bdevs_operational": 4, 00:13:08.216 "base_bdevs_list": [ 00:13:08.216 { 00:13:08.216 "name": "BaseBdev1", 00:13:08.216 "uuid": "bfc47487-4225-11ef-aa83-81fbc7dfef58", 00:13:08.216 "is_configured": true, 00:13:08.216 "data_offset": 0, 00:13:08.216 "data_size": 65536 00:13:08.216 }, 00:13:08.216 { 00:13:08.216 "name": null, 00:13:08.216 "uuid": "bd617ba2-4225-11ef-aa83-81fbc7dfef58", 00:13:08.216 "is_configured": false, 00:13:08.216 "data_offset": 0, 00:13:08.216 "data_size": 65536 00:13:08.216 }, 00:13:08.216 { 00:13:08.216 "name": "BaseBdev3", 00:13:08.216 "uuid": "bdd43b28-4225-11ef-aa83-81fbc7dfef58", 00:13:08.216 "is_configured": true, 00:13:08.216 "data_offset": 0, 00:13:08.216 "data_size": 65536 00:13:08.216 }, 00:13:08.216 { 00:13:08.216 "name": "BaseBdev4", 00:13:08.216 "uuid": "be404430-4225-11ef-aa83-81fbc7dfef58", 00:13:08.216 "is_configured": true, 00:13:08.216 "data_offset": 0, 00:13:08.216 "data_size": 65536 00:13:08.216 } 00:13:08.216 ] 00:13:08.216 }' 00:13:08.216 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:08.216 21:12:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:08.474 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:08.474 21:12:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:08.732 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:08.732 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:08.990 [2024-07-14 21:12:20.368169] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:08.990 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:08.990 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:08.990 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:08.990 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:08.990 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:08.990 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:08.990 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:08.990 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:08.990 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:08.990 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:08.990 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:08.990 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.249 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:09.249 "name": "Existed_Raid", 00:13:09.249 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:13:09.249 "strip_size_kb": 64, 00:13:09.249 "state": "configuring", 00:13:09.249 "raid_level": "raid0", 00:13:09.249 "superblock": false, 00:13:09.249 "num_base_bdevs": 4, 00:13:09.249 "num_base_bdevs_discovered": 2, 00:13:09.249 "num_base_bdevs_operational": 4, 00:13:09.249 "base_bdevs_list": [ 00:13:09.249 { 00:13:09.249 "name": null, 00:13:09.249 "uuid": "bfc47487-4225-11ef-aa83-81fbc7dfef58", 00:13:09.249 "is_configured": false, 00:13:09.249 "data_offset": 0, 00:13:09.249 "data_size": 65536 00:13:09.249 }, 00:13:09.249 { 00:13:09.249 "name": null, 00:13:09.249 "uuid": "bd617ba2-4225-11ef-aa83-81fbc7dfef58", 00:13:09.249 "is_configured": false, 00:13:09.249 "data_offset": 0, 00:13:09.249 "data_size": 65536 00:13:09.249 }, 00:13:09.249 { 00:13:09.249 "name": "BaseBdev3", 00:13:09.249 "uuid": "bdd43b28-4225-11ef-aa83-81fbc7dfef58", 00:13:09.249 "is_configured": true, 00:13:09.249 "data_offset": 0, 00:13:09.249 "data_size": 65536 00:13:09.249 }, 00:13:09.249 { 00:13:09.249 "name": "BaseBdev4", 00:13:09.249 "uuid": "be404430-4225-11ef-aa83-81fbc7dfef58", 00:13:09.249 "is_configured": true, 00:13:09.249 "data_offset": 0, 00:13:09.249 "data_size": 65536 00:13:09.249 } 00:13:09.249 ] 00:13:09.249 }' 00:13:09.249 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:09.249 21:12:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:09.507 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:09.507 21:12:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:09.765 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:09.765 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:10.023 [2024-07-14 21:12:21.432492] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:10.023 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:10.023 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:10.023 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:10.023 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:10.023 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:10.023 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:10.023 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:10.023 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:10.023 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:10.023 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:10.023 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs 
all 00:13:10.023 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:10.281 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:10.281 "name": "Existed_Raid", 00:13:10.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:10.281 "strip_size_kb": 64, 00:13:10.281 "state": "configuring", 00:13:10.281 "raid_level": "raid0", 00:13:10.281 "superblock": false, 00:13:10.281 "num_base_bdevs": 4, 00:13:10.281 "num_base_bdevs_discovered": 3, 00:13:10.281 "num_base_bdevs_operational": 4, 00:13:10.281 "base_bdevs_list": [ 00:13:10.281 { 00:13:10.281 "name": null, 00:13:10.281 "uuid": "bfc47487-4225-11ef-aa83-81fbc7dfef58", 00:13:10.281 "is_configured": false, 00:13:10.281 "data_offset": 0, 00:13:10.281 "data_size": 65536 00:13:10.281 }, 00:13:10.281 { 00:13:10.281 "name": "BaseBdev2", 00:13:10.281 "uuid": "bd617ba2-4225-11ef-aa83-81fbc7dfef58", 00:13:10.281 "is_configured": true, 00:13:10.281 "data_offset": 0, 00:13:10.281 "data_size": 65536 00:13:10.281 }, 00:13:10.281 { 00:13:10.281 "name": "BaseBdev3", 00:13:10.281 "uuid": "bdd43b28-4225-11ef-aa83-81fbc7dfef58", 00:13:10.281 "is_configured": true, 00:13:10.281 "data_offset": 0, 00:13:10.281 "data_size": 65536 00:13:10.281 }, 00:13:10.281 { 00:13:10.281 "name": "BaseBdev4", 00:13:10.281 "uuid": "be404430-4225-11ef-aa83-81fbc7dfef58", 00:13:10.281 "is_configured": true, 00:13:10.281 "data_offset": 0, 00:13:10.281 "data_size": 65536 00:13:10.281 } 00:13:10.281 ] 00:13:10.281 }' 00:13:10.281 21:12:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:10.281 21:12:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:10.538 21:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.538 21:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:10.796 21:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:10.796 21:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:10.796 21:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:11.055 21:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u bfc47487-4225-11ef-aa83-81fbc7dfef58 00:13:11.312 [2024-07-14 21:12:22.720629] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:11.313 [2024-07-14 21:12:22.720645] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x209dcfc34f00 00:13:11.313 [2024-07-14 21:12:22.720649] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:13:11.313 [2024-07-14 21:12:22.720670] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x209dcfc97e20 00:13:11.313 [2024-07-14 21:12:22.720754] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x209dcfc34f00 00:13:11.313 [2024-07-14 21:12:22.720758] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x209dcfc34f00 00:13:11.313 [2024-07-14 
21:12:22.720788] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:11.313 NewBaseBdev 00:13:11.313 21:12:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:11.313 21:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:11.313 21:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:11.313 21:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:13:11.313 21:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:11.313 21:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:11.313 21:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:11.572 21:12:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:11.833 [ 00:13:11.833 { 00:13:11.833 "name": "NewBaseBdev", 00:13:11.833 "aliases": [ 00:13:11.833 "bfc47487-4225-11ef-aa83-81fbc7dfef58" 00:13:11.833 ], 00:13:11.833 "product_name": "Malloc disk", 00:13:11.833 "block_size": 512, 00:13:11.833 "num_blocks": 65536, 00:13:11.833 "uuid": "bfc47487-4225-11ef-aa83-81fbc7dfef58", 00:13:11.833 "assigned_rate_limits": { 00:13:11.833 "rw_ios_per_sec": 0, 00:13:11.833 "rw_mbytes_per_sec": 0, 00:13:11.833 "r_mbytes_per_sec": 0, 00:13:11.833 "w_mbytes_per_sec": 0 00:13:11.833 }, 00:13:11.833 "claimed": true, 00:13:11.833 "claim_type": "exclusive_write", 00:13:11.833 "zoned": false, 00:13:11.833 "supported_io_types": { 00:13:11.833 "read": true, 00:13:11.833 "write": true, 00:13:11.833 "unmap": true, 00:13:11.833 "flush": true, 00:13:11.833 "reset": true, 00:13:11.833 "nvme_admin": false, 00:13:11.833 "nvme_io": false, 00:13:11.833 "nvme_io_md": false, 00:13:11.833 "write_zeroes": true, 00:13:11.833 "zcopy": true, 00:13:11.833 "get_zone_info": false, 00:13:11.833 "zone_management": false, 00:13:11.833 "zone_append": false, 00:13:11.833 "compare": false, 00:13:11.833 "compare_and_write": false, 00:13:11.833 "abort": true, 00:13:11.833 "seek_hole": false, 00:13:11.833 "seek_data": false, 00:13:11.833 "copy": true, 00:13:11.833 "nvme_iov_md": false 00:13:11.833 }, 00:13:11.833 "memory_domains": [ 00:13:11.833 { 00:13:11.833 "dma_device_id": "system", 00:13:11.833 "dma_device_type": 1 00:13:11.833 }, 00:13:11.833 { 00:13:11.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:11.833 "dma_device_type": 2 00:13:11.833 } 00:13:11.833 ], 00:13:11.833 "driver_specific": {} 00:13:11.833 } 00:13:11.833 ] 00:13:11.833 21:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:13:11.833 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:11.833 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:11.833 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:11.833 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:11.833 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:11.833 
21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:11.833 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:11.833 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:11.833 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:11.833 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:11.833 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:11.833 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:12.091 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:12.091 "name": "Existed_Raid", 00:13:12.091 "uuid": "c328759d-4225-11ef-aa83-81fbc7dfef58", 00:13:12.091 "strip_size_kb": 64, 00:13:12.091 "state": "online", 00:13:12.091 "raid_level": "raid0", 00:13:12.091 "superblock": false, 00:13:12.091 "num_base_bdevs": 4, 00:13:12.091 "num_base_bdevs_discovered": 4, 00:13:12.091 "num_base_bdevs_operational": 4, 00:13:12.091 "base_bdevs_list": [ 00:13:12.091 { 00:13:12.091 "name": "NewBaseBdev", 00:13:12.091 "uuid": "bfc47487-4225-11ef-aa83-81fbc7dfef58", 00:13:12.091 "is_configured": true, 00:13:12.091 "data_offset": 0, 00:13:12.091 "data_size": 65536 00:13:12.091 }, 00:13:12.091 { 00:13:12.091 "name": "BaseBdev2", 00:13:12.091 "uuid": "bd617ba2-4225-11ef-aa83-81fbc7dfef58", 00:13:12.091 "is_configured": true, 00:13:12.091 "data_offset": 0, 00:13:12.091 "data_size": 65536 00:13:12.091 }, 00:13:12.091 { 00:13:12.091 "name": "BaseBdev3", 00:13:12.091 "uuid": "bdd43b28-4225-11ef-aa83-81fbc7dfef58", 00:13:12.091 "is_configured": true, 00:13:12.091 "data_offset": 0, 00:13:12.091 "data_size": 65536 00:13:12.091 }, 00:13:12.091 { 00:13:12.091 "name": "BaseBdev4", 00:13:12.091 "uuid": "be404430-4225-11ef-aa83-81fbc7dfef58", 00:13:12.091 "is_configured": true, 00:13:12.091 "data_offset": 0, 00:13:12.091 "data_size": 65536 00:13:12.091 } 00:13:12.091 ] 00:13:12.091 }' 00:13:12.091 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:12.091 21:12:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:12.348 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:12.348 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:12.348 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:12.348 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:12.348 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:12.348 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:12.348 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:12.348 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:12.605 [2024-07-14 21:12:23.936550] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:13:12.605 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:12.605 "name": "Existed_Raid", 00:13:12.605 "aliases": [ 00:13:12.605 "c328759d-4225-11ef-aa83-81fbc7dfef58" 00:13:12.605 ], 00:13:12.605 "product_name": "Raid Volume", 00:13:12.605 "block_size": 512, 00:13:12.605 "num_blocks": 262144, 00:13:12.605 "uuid": "c328759d-4225-11ef-aa83-81fbc7dfef58", 00:13:12.605 "assigned_rate_limits": { 00:13:12.605 "rw_ios_per_sec": 0, 00:13:12.605 "rw_mbytes_per_sec": 0, 00:13:12.605 "r_mbytes_per_sec": 0, 00:13:12.605 "w_mbytes_per_sec": 0 00:13:12.605 }, 00:13:12.605 "claimed": false, 00:13:12.605 "zoned": false, 00:13:12.605 "supported_io_types": { 00:13:12.605 "read": true, 00:13:12.605 "write": true, 00:13:12.605 "unmap": true, 00:13:12.605 "flush": true, 00:13:12.605 "reset": true, 00:13:12.605 "nvme_admin": false, 00:13:12.605 "nvme_io": false, 00:13:12.605 "nvme_io_md": false, 00:13:12.605 "write_zeroes": true, 00:13:12.605 "zcopy": false, 00:13:12.605 "get_zone_info": false, 00:13:12.605 "zone_management": false, 00:13:12.605 "zone_append": false, 00:13:12.605 "compare": false, 00:13:12.605 "compare_and_write": false, 00:13:12.605 "abort": false, 00:13:12.605 "seek_hole": false, 00:13:12.605 "seek_data": false, 00:13:12.605 "copy": false, 00:13:12.605 "nvme_iov_md": false 00:13:12.605 }, 00:13:12.605 "memory_domains": [ 00:13:12.605 { 00:13:12.605 "dma_device_id": "system", 00:13:12.605 "dma_device_type": 1 00:13:12.605 }, 00:13:12.605 { 00:13:12.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.605 "dma_device_type": 2 00:13:12.605 }, 00:13:12.605 { 00:13:12.605 "dma_device_id": "system", 00:13:12.605 "dma_device_type": 1 00:13:12.605 }, 00:13:12.605 { 00:13:12.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.605 "dma_device_type": 2 00:13:12.605 }, 00:13:12.605 { 00:13:12.605 "dma_device_id": "system", 00:13:12.605 "dma_device_type": 1 00:13:12.605 }, 00:13:12.605 { 00:13:12.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.605 "dma_device_type": 2 00:13:12.605 }, 00:13:12.605 { 00:13:12.605 "dma_device_id": "system", 00:13:12.605 "dma_device_type": 1 00:13:12.605 }, 00:13:12.605 { 00:13:12.605 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.605 "dma_device_type": 2 00:13:12.605 } 00:13:12.605 ], 00:13:12.605 "driver_specific": { 00:13:12.605 "raid": { 00:13:12.605 "uuid": "c328759d-4225-11ef-aa83-81fbc7dfef58", 00:13:12.605 "strip_size_kb": 64, 00:13:12.605 "state": "online", 00:13:12.605 "raid_level": "raid0", 00:13:12.605 "superblock": false, 00:13:12.605 "num_base_bdevs": 4, 00:13:12.605 "num_base_bdevs_discovered": 4, 00:13:12.605 "num_base_bdevs_operational": 4, 00:13:12.606 "base_bdevs_list": [ 00:13:12.606 { 00:13:12.606 "name": "NewBaseBdev", 00:13:12.606 "uuid": "bfc47487-4225-11ef-aa83-81fbc7dfef58", 00:13:12.606 "is_configured": true, 00:13:12.606 "data_offset": 0, 00:13:12.606 "data_size": 65536 00:13:12.606 }, 00:13:12.606 { 00:13:12.606 "name": "BaseBdev2", 00:13:12.606 "uuid": "bd617ba2-4225-11ef-aa83-81fbc7dfef58", 00:13:12.606 "is_configured": true, 00:13:12.606 "data_offset": 0, 00:13:12.606 "data_size": 65536 00:13:12.606 }, 00:13:12.606 { 00:13:12.606 "name": "BaseBdev3", 00:13:12.606 "uuid": "bdd43b28-4225-11ef-aa83-81fbc7dfef58", 00:13:12.606 "is_configured": true, 00:13:12.606 "data_offset": 0, 00:13:12.606 "data_size": 65536 00:13:12.606 }, 00:13:12.606 { 00:13:12.606 "name": "BaseBdev4", 00:13:12.606 "uuid": "be404430-4225-11ef-aa83-81fbc7dfef58", 00:13:12.606 
"is_configured": true, 00:13:12.606 "data_offset": 0, 00:13:12.606 "data_size": 65536 00:13:12.606 } 00:13:12.606 ] 00:13:12.606 } 00:13:12.606 } 00:13:12.606 }' 00:13:12.606 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:12.606 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:12.606 BaseBdev2 00:13:12.606 BaseBdev3 00:13:12.606 BaseBdev4' 00:13:12.606 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:12.606 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:12.606 21:12:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:12.862 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:12.862 "name": "NewBaseBdev", 00:13:12.862 "aliases": [ 00:13:12.862 "bfc47487-4225-11ef-aa83-81fbc7dfef58" 00:13:12.862 ], 00:13:12.862 "product_name": "Malloc disk", 00:13:12.862 "block_size": 512, 00:13:12.862 "num_blocks": 65536, 00:13:12.862 "uuid": "bfc47487-4225-11ef-aa83-81fbc7dfef58", 00:13:12.862 "assigned_rate_limits": { 00:13:12.862 "rw_ios_per_sec": 0, 00:13:12.862 "rw_mbytes_per_sec": 0, 00:13:12.862 "r_mbytes_per_sec": 0, 00:13:12.862 "w_mbytes_per_sec": 0 00:13:12.863 }, 00:13:12.863 "claimed": true, 00:13:12.863 "claim_type": "exclusive_write", 00:13:12.863 "zoned": false, 00:13:12.863 "supported_io_types": { 00:13:12.863 "read": true, 00:13:12.863 "write": true, 00:13:12.863 "unmap": true, 00:13:12.863 "flush": true, 00:13:12.863 "reset": true, 00:13:12.863 "nvme_admin": false, 00:13:12.863 "nvme_io": false, 00:13:12.863 "nvme_io_md": false, 00:13:12.863 "write_zeroes": true, 00:13:12.863 "zcopy": true, 00:13:12.863 "get_zone_info": false, 00:13:12.863 "zone_management": false, 00:13:12.863 "zone_append": false, 00:13:12.863 "compare": false, 00:13:12.863 "compare_and_write": false, 00:13:12.863 "abort": true, 00:13:12.863 "seek_hole": false, 00:13:12.863 "seek_data": false, 00:13:12.863 "copy": true, 00:13:12.863 "nvme_iov_md": false 00:13:12.863 }, 00:13:12.863 "memory_domains": [ 00:13:12.863 { 00:13:12.863 "dma_device_id": "system", 00:13:12.863 "dma_device_type": 1 00:13:12.863 }, 00:13:12.863 { 00:13:12.863 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:12.863 "dma_device_type": 2 00:13:12.863 } 00:13:12.863 ], 00:13:12.863 "driver_specific": {} 00:13:12.863 }' 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # 
[[ null == null ]] 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:12.863 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:13.120 "name": "BaseBdev2", 00:13:13.120 "aliases": [ 00:13:13.120 "bd617ba2-4225-11ef-aa83-81fbc7dfef58" 00:13:13.120 ], 00:13:13.120 "product_name": "Malloc disk", 00:13:13.120 "block_size": 512, 00:13:13.120 "num_blocks": 65536, 00:13:13.120 "uuid": "bd617ba2-4225-11ef-aa83-81fbc7dfef58", 00:13:13.120 "assigned_rate_limits": { 00:13:13.120 "rw_ios_per_sec": 0, 00:13:13.120 "rw_mbytes_per_sec": 0, 00:13:13.120 "r_mbytes_per_sec": 0, 00:13:13.120 "w_mbytes_per_sec": 0 00:13:13.120 }, 00:13:13.120 "claimed": true, 00:13:13.120 "claim_type": "exclusive_write", 00:13:13.120 "zoned": false, 00:13:13.120 "supported_io_types": { 00:13:13.120 "read": true, 00:13:13.120 "write": true, 00:13:13.120 "unmap": true, 00:13:13.120 "flush": true, 00:13:13.120 "reset": true, 00:13:13.120 "nvme_admin": false, 00:13:13.120 "nvme_io": false, 00:13:13.120 "nvme_io_md": false, 00:13:13.120 "write_zeroes": true, 00:13:13.120 "zcopy": true, 00:13:13.120 "get_zone_info": false, 00:13:13.120 "zone_management": false, 00:13:13.120 "zone_append": false, 00:13:13.120 "compare": false, 00:13:13.120 "compare_and_write": false, 00:13:13.120 "abort": true, 00:13:13.120 "seek_hole": false, 00:13:13.120 "seek_data": false, 00:13:13.120 "copy": true, 00:13:13.120 "nvme_iov_md": false 00:13:13.120 }, 00:13:13.120 "memory_domains": [ 00:13:13.120 { 00:13:13.120 "dma_device_id": "system", 00:13:13.120 "dma_device_type": 1 00:13:13.120 }, 00:13:13.120 { 00:13:13.120 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.120 "dma_device_type": 2 00:13:13.120 } 00:13:13.120 ], 00:13:13.120 "driver_specific": {} 00:13:13.120 }' 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 
-- # jq .dif_type 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:13.120 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:13.378 "name": "BaseBdev3", 00:13:13.378 "aliases": [ 00:13:13.378 "bdd43b28-4225-11ef-aa83-81fbc7dfef58" 00:13:13.378 ], 00:13:13.378 "product_name": "Malloc disk", 00:13:13.378 "block_size": 512, 00:13:13.378 "num_blocks": 65536, 00:13:13.378 "uuid": "bdd43b28-4225-11ef-aa83-81fbc7dfef58", 00:13:13.378 "assigned_rate_limits": { 00:13:13.378 "rw_ios_per_sec": 0, 00:13:13.378 "rw_mbytes_per_sec": 0, 00:13:13.378 "r_mbytes_per_sec": 0, 00:13:13.378 "w_mbytes_per_sec": 0 00:13:13.378 }, 00:13:13.378 "claimed": true, 00:13:13.378 "claim_type": "exclusive_write", 00:13:13.378 "zoned": false, 00:13:13.378 "supported_io_types": { 00:13:13.378 "read": true, 00:13:13.378 "write": true, 00:13:13.378 "unmap": true, 00:13:13.378 "flush": true, 00:13:13.378 "reset": true, 00:13:13.378 "nvme_admin": false, 00:13:13.378 "nvme_io": false, 00:13:13.378 "nvme_io_md": false, 00:13:13.378 "write_zeroes": true, 00:13:13.378 "zcopy": true, 00:13:13.378 "get_zone_info": false, 00:13:13.378 "zone_management": false, 00:13:13.378 "zone_append": false, 00:13:13.378 "compare": false, 00:13:13.378 "compare_and_write": false, 00:13:13.378 "abort": true, 00:13:13.378 "seek_hole": false, 00:13:13.378 "seek_data": false, 00:13:13.378 "copy": true, 00:13:13.378 "nvme_iov_md": false 00:13:13.378 }, 00:13:13.378 "memory_domains": [ 00:13:13.378 { 00:13:13.378 "dma_device_id": "system", 00:13:13.378 "dma_device_type": 1 00:13:13.378 }, 00:13:13.378 { 00:13:13.378 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.378 "dma_device_type": 2 00:13:13.378 } 00:13:13.378 ], 00:13:13.378 "driver_specific": {} 00:13:13.378 }' 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:13.378 21:12:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:13.635 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:13.635 "name": "BaseBdev4", 00:13:13.635 "aliases": [ 00:13:13.635 "be404430-4225-11ef-aa83-81fbc7dfef58" 00:13:13.635 ], 00:13:13.635 "product_name": "Malloc disk", 00:13:13.635 "block_size": 512, 00:13:13.635 "num_blocks": 65536, 00:13:13.635 "uuid": "be404430-4225-11ef-aa83-81fbc7dfef58", 00:13:13.635 "assigned_rate_limits": { 00:13:13.635 "rw_ios_per_sec": 0, 00:13:13.635 "rw_mbytes_per_sec": 0, 00:13:13.635 "r_mbytes_per_sec": 0, 00:13:13.635 "w_mbytes_per_sec": 0 00:13:13.635 }, 00:13:13.635 "claimed": true, 00:13:13.635 "claim_type": "exclusive_write", 00:13:13.635 "zoned": false, 00:13:13.635 "supported_io_types": { 00:13:13.635 "read": true, 00:13:13.635 "write": true, 00:13:13.635 "unmap": true, 00:13:13.635 "flush": true, 00:13:13.635 "reset": true, 00:13:13.635 "nvme_admin": false, 00:13:13.635 "nvme_io": false, 00:13:13.635 "nvme_io_md": false, 00:13:13.635 "write_zeroes": true, 00:13:13.635 "zcopy": true, 00:13:13.635 "get_zone_info": false, 00:13:13.635 "zone_management": false, 00:13:13.635 "zone_append": false, 00:13:13.635 "compare": false, 00:13:13.635 "compare_and_write": false, 00:13:13.635 "abort": true, 00:13:13.635 "seek_hole": false, 00:13:13.635 "seek_data": false, 00:13:13.635 "copy": true, 00:13:13.635 "nvme_iov_md": false 00:13:13.635 }, 00:13:13.635 "memory_domains": [ 00:13:13.635 { 00:13:13.635 "dma_device_id": "system", 00:13:13.635 "dma_device_type": 1 00:13:13.635 }, 00:13:13.635 { 00:13:13.635 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:13.635 "dma_device_type": 2 00:13:13.635 } 00:13:13.635 ], 00:13:13.635 "driver_specific": {} 00:13:13.635 }' 00:13:13.635 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:13.635 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:13.635 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:13.635 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:13.635 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:13.635 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:13.635 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:13.635 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:13.635 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:13.635 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:13.893 [2024-07-14 21:12:25.412554] 
bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:13.893 [2024-07-14 21:12:25.412566] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:13.893 [2024-07-14 21:12:25.412588] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:13.893 [2024-07-14 21:12:25.412599] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:13.893 [2024-07-14 21:12:25.412602] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x209dcfc34f00 name Existed_Raid, state offline 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 58294 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 58294 ']' 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 58294 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 58294 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:13:13.893 killing process with pid 58294 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 58294' 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 58294 00:13:13.893 [2024-07-14 21:12:25.438681] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:13.893 21:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 58294 00:13:14.150 [2024-07-14 21:12:25.472272] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:13:14.408 00:13:14.408 real 0m25.218s 00:13:14.408 user 0m45.789s 00:13:14.408 sys 0m3.703s 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:14.408 ************************************ 00:13:14.408 END TEST raid_state_function_test 00:13:14.408 ************************************ 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:13:14.408 21:12:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:14.408 21:12:25 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid0 4 true 00:13:14.408 21:12:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:14.408 21:12:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:14.408 21:12:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:14.408 ************************************ 00:13:14.408 START TEST raid_state_function_test_sb 00:13:14.408 ************************************ 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test raid0 4 true 
00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid0 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid0 '!=' raid1 ']' 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=59105 00:13:14.408 Process raid pid: 59105 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 59105' 00:13:14.408 21:12:25 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 59105 /var/tmp/spdk-raid.sock 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 59105 ']' 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:14.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:14.408 21:12:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:14.408 [2024-07-14 21:12:25.779650] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:14.408 [2024-07-14 21:12:25.779931] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:14.983 EAL: TSC is not safe to use in SMP mode 00:13:14.983 EAL: TSC is not invariant 00:13:14.983 [2024-07-14 21:12:26.311308] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:14.983 [2024-07-14 21:12:26.411734] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:13:14.983 [2024-07-14 21:12:26.414244] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.983 [2024-07-14 21:12:26.415072] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:14.983 [2024-07-14 21:12:26.415085] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:15.257 21:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:15.257 21:12:26 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:13:15.257 21:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:15.513 [2024-07-14 21:12:26.958100] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:15.513 [2024-07-14 21:12:26.958156] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:15.513 [2024-07-14 21:12:26.958160] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:15.513 [2024-07-14 21:12:26.958179] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:15.513 [2024-07-14 21:12:26.958182] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:15.513 [2024-07-14 21:12:26.958188] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:15.513 [2024-07-14 21:12:26.958191] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:15.513 [2024-07-14 21:12:26.958197] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:15.513 21:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:15.513 21:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:15.513 21:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:15.513 21:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:15.513 21:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:15.513 21:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:15.513 21:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:15.513 21:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:15.513 21:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:15.513 21:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:15.513 21:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:15.513 21:12:26 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:15.770 21:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:15.770 "name": "Existed_Raid", 00:13:15.770 "uuid": 
"c5af0aa4-4225-11ef-aa83-81fbc7dfef58", 00:13:15.770 "strip_size_kb": 64, 00:13:15.770 "state": "configuring", 00:13:15.770 "raid_level": "raid0", 00:13:15.770 "superblock": true, 00:13:15.770 "num_base_bdevs": 4, 00:13:15.770 "num_base_bdevs_discovered": 0, 00:13:15.770 "num_base_bdevs_operational": 4, 00:13:15.770 "base_bdevs_list": [ 00:13:15.770 { 00:13:15.770 "name": "BaseBdev1", 00:13:15.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.770 "is_configured": false, 00:13:15.770 "data_offset": 0, 00:13:15.770 "data_size": 0 00:13:15.770 }, 00:13:15.770 { 00:13:15.770 "name": "BaseBdev2", 00:13:15.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.770 "is_configured": false, 00:13:15.770 "data_offset": 0, 00:13:15.770 "data_size": 0 00:13:15.770 }, 00:13:15.770 { 00:13:15.770 "name": "BaseBdev3", 00:13:15.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.770 "is_configured": false, 00:13:15.770 "data_offset": 0, 00:13:15.770 "data_size": 0 00:13:15.770 }, 00:13:15.770 { 00:13:15.770 "name": "BaseBdev4", 00:13:15.770 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:15.770 "is_configured": false, 00:13:15.770 "data_offset": 0, 00:13:15.770 "data_size": 0 00:13:15.770 } 00:13:15.770 ] 00:13:15.770 }' 00:13:15.770 21:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:15.770 21:12:27 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:16.031 21:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:16.288 [2024-07-14 21:12:27.718107] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:16.288 [2024-07-14 21:12:27.718123] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x341b58434500 name Existed_Raid, state configuring 00:13:16.288 21:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:16.546 [2024-07-14 21:12:27.942115] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:16.546 [2024-07-14 21:12:27.942153] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:16.546 [2024-07-14 21:12:27.942157] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:16.546 [2024-07-14 21:12:27.942177] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:16.546 [2024-07-14 21:12:27.942180] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:16.546 [2024-07-14 21:12:27.942186] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:16.546 [2024-07-14 21:12:27.942189] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:16.546 [2024-07-14 21:12:27.942195] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:16.546 21:12:27 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:16.804 [2024-07-14 21:12:28.207229] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is 
claimed 00:13:16.804 BaseBdev1 00:13:16.804 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:13:16.804 21:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:16.804 21:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:16.804 21:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:16.804 21:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:16.804 21:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:16.804 21:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:17.061 21:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:17.319 [ 00:13:17.319 { 00:13:17.319 "name": "BaseBdev1", 00:13:17.319 "aliases": [ 00:13:17.319 "c66d79f8-4225-11ef-aa83-81fbc7dfef58" 00:13:17.319 ], 00:13:17.319 "product_name": "Malloc disk", 00:13:17.319 "block_size": 512, 00:13:17.319 "num_blocks": 65536, 00:13:17.319 "uuid": "c66d79f8-4225-11ef-aa83-81fbc7dfef58", 00:13:17.319 "assigned_rate_limits": { 00:13:17.319 "rw_ios_per_sec": 0, 00:13:17.319 "rw_mbytes_per_sec": 0, 00:13:17.319 "r_mbytes_per_sec": 0, 00:13:17.319 "w_mbytes_per_sec": 0 00:13:17.319 }, 00:13:17.319 "claimed": true, 00:13:17.319 "claim_type": "exclusive_write", 00:13:17.319 "zoned": false, 00:13:17.319 "supported_io_types": { 00:13:17.319 "read": true, 00:13:17.319 "write": true, 00:13:17.319 "unmap": true, 00:13:17.319 "flush": true, 00:13:17.319 "reset": true, 00:13:17.319 "nvme_admin": false, 00:13:17.319 "nvme_io": false, 00:13:17.319 "nvme_io_md": false, 00:13:17.319 "write_zeroes": true, 00:13:17.319 "zcopy": true, 00:13:17.319 "get_zone_info": false, 00:13:17.319 "zone_management": false, 00:13:17.319 "zone_append": false, 00:13:17.319 "compare": false, 00:13:17.319 "compare_and_write": false, 00:13:17.319 "abort": true, 00:13:17.319 "seek_hole": false, 00:13:17.319 "seek_data": false, 00:13:17.319 "copy": true, 00:13:17.319 "nvme_iov_md": false 00:13:17.319 }, 00:13:17.319 "memory_domains": [ 00:13:17.319 { 00:13:17.319 "dma_device_id": "system", 00:13:17.319 "dma_device_type": 1 00:13:17.319 }, 00:13:17.319 { 00:13:17.319 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:17.319 "dma_device_type": 2 00:13:17.319 } 00:13:17.319 ], 00:13:17.319 "driver_specific": {} 00:13:17.319 } 00:13:17.319 ] 00:13:17.319 21:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:17.319 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:17.319 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:17.319 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:17.319 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:17.319 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:17.319 21:12:28 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:17.319 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:17.319 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:17.319 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:17.319 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:17.319 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:17.319 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:17.577 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:17.577 "name": "Existed_Raid", 00:13:17.577 "uuid": "c64530c3-4225-11ef-aa83-81fbc7dfef58", 00:13:17.577 "strip_size_kb": 64, 00:13:17.577 "state": "configuring", 00:13:17.577 "raid_level": "raid0", 00:13:17.577 "superblock": true, 00:13:17.577 "num_base_bdevs": 4, 00:13:17.577 "num_base_bdevs_discovered": 1, 00:13:17.577 "num_base_bdevs_operational": 4, 00:13:17.577 "base_bdevs_list": [ 00:13:17.577 { 00:13:17.577 "name": "BaseBdev1", 00:13:17.577 "uuid": "c66d79f8-4225-11ef-aa83-81fbc7dfef58", 00:13:17.577 "is_configured": true, 00:13:17.577 "data_offset": 2048, 00:13:17.577 "data_size": 63488 00:13:17.577 }, 00:13:17.577 { 00:13:17.577 "name": "BaseBdev2", 00:13:17.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.577 "is_configured": false, 00:13:17.577 "data_offset": 0, 00:13:17.577 "data_size": 0 00:13:17.577 }, 00:13:17.577 { 00:13:17.577 "name": "BaseBdev3", 00:13:17.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.577 "is_configured": false, 00:13:17.577 "data_offset": 0, 00:13:17.577 "data_size": 0 00:13:17.577 }, 00:13:17.577 { 00:13:17.577 "name": "BaseBdev4", 00:13:17.577 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:17.577 "is_configured": false, 00:13:17.577 "data_offset": 0, 00:13:17.577 "data_size": 0 00:13:17.577 } 00:13:17.577 ] 00:13:17.577 }' 00:13:17.577 21:12:28 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:17.577 21:12:28 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:17.835 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:18.093 [2024-07-14 21:12:29.474184] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:18.093 [2024-07-14 21:12:29.474217] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x341b58434500 name Existed_Raid, state configuring 00:13:18.093 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:18.351 [2024-07-14 21:12:29.706194] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:18.351 [2024-07-14 21:12:29.707076] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:13:18.351 [2024-07-14 21:12:29.707115] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:13:18.351 [2024-07-14 21:12:29.707120] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:13:18.351 [2024-07-14 21:12:29.707127] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:13:18.351 [2024-07-14 21:12:29.707130] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:13:18.351 [2024-07-14 21:12:29.707136] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:18.351 21:12:29 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:18.608 21:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:18.609 "name": "Existed_Raid", 00:13:18.609 "uuid": "c7525df0-4225-11ef-aa83-81fbc7dfef58", 00:13:18.609 "strip_size_kb": 64, 00:13:18.609 "state": "configuring", 00:13:18.609 "raid_level": "raid0", 00:13:18.609 "superblock": true, 00:13:18.609 "num_base_bdevs": 4, 00:13:18.609 "num_base_bdevs_discovered": 1, 00:13:18.609 "num_base_bdevs_operational": 4, 00:13:18.609 "base_bdevs_list": [ 00:13:18.609 { 00:13:18.609 "name": "BaseBdev1", 00:13:18.609 "uuid": "c66d79f8-4225-11ef-aa83-81fbc7dfef58", 00:13:18.609 "is_configured": true, 00:13:18.609 "data_offset": 2048, 00:13:18.609 "data_size": 63488 00:13:18.609 }, 00:13:18.609 { 00:13:18.609 "name": "BaseBdev2", 00:13:18.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.609 "is_configured": false, 00:13:18.609 "data_offset": 0, 00:13:18.609 "data_size": 0 00:13:18.609 }, 00:13:18.609 { 00:13:18.609 "name": "BaseBdev3", 00:13:18.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.609 "is_configured": false, 00:13:18.609 "data_offset": 0, 00:13:18.609 "data_size": 0 00:13:18.609 }, 00:13:18.609 { 00:13:18.609 "name": "BaseBdev4", 
00:13:18.609 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:18.609 "is_configured": false, 00:13:18.609 "data_offset": 0, 00:13:18.609 "data_size": 0 00:13:18.609 } 00:13:18.609 ] 00:13:18.609 }' 00:13:18.609 21:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:18.609 21:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:18.867 21:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:19.125 [2024-07-14 21:12:30.546353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:19.125 BaseBdev2 00:13:19.125 21:12:30 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:13:19.125 21:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:19.125 21:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:19.125 21:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:19.125 21:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:19.125 21:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:19.125 21:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:19.383 21:12:30 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:19.641 [ 00:13:19.641 { 00:13:19.641 "name": "BaseBdev2", 00:13:19.641 "aliases": [ 00:13:19.641 "c7d28b0e-4225-11ef-aa83-81fbc7dfef58" 00:13:19.641 ], 00:13:19.641 "product_name": "Malloc disk", 00:13:19.641 "block_size": 512, 00:13:19.641 "num_blocks": 65536, 00:13:19.641 "uuid": "c7d28b0e-4225-11ef-aa83-81fbc7dfef58", 00:13:19.641 "assigned_rate_limits": { 00:13:19.641 "rw_ios_per_sec": 0, 00:13:19.641 "rw_mbytes_per_sec": 0, 00:13:19.641 "r_mbytes_per_sec": 0, 00:13:19.641 "w_mbytes_per_sec": 0 00:13:19.641 }, 00:13:19.641 "claimed": true, 00:13:19.641 "claim_type": "exclusive_write", 00:13:19.641 "zoned": false, 00:13:19.641 "supported_io_types": { 00:13:19.641 "read": true, 00:13:19.641 "write": true, 00:13:19.641 "unmap": true, 00:13:19.641 "flush": true, 00:13:19.641 "reset": true, 00:13:19.641 "nvme_admin": false, 00:13:19.641 "nvme_io": false, 00:13:19.641 "nvme_io_md": false, 00:13:19.641 "write_zeroes": true, 00:13:19.641 "zcopy": true, 00:13:19.641 "get_zone_info": false, 00:13:19.641 "zone_management": false, 00:13:19.641 "zone_append": false, 00:13:19.641 "compare": false, 00:13:19.641 "compare_and_write": false, 00:13:19.641 "abort": true, 00:13:19.641 "seek_hole": false, 00:13:19.641 "seek_data": false, 00:13:19.641 "copy": true, 00:13:19.641 "nvme_iov_md": false 00:13:19.641 }, 00:13:19.641 "memory_domains": [ 00:13:19.641 { 00:13:19.641 "dma_device_id": "system", 00:13:19.641 "dma_device_type": 1 00:13:19.641 }, 00:13:19.641 { 00:13:19.641 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:19.641 "dma_device_type": 2 00:13:19.641 } 00:13:19.641 ], 00:13:19.641 "driver_specific": {} 00:13:19.641 } 00:13:19.641 ] 00:13:19.641 21:12:31 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:19.641 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:19.899 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:19.899 "name": "Existed_Raid", 00:13:19.899 "uuid": "c7525df0-4225-11ef-aa83-81fbc7dfef58", 00:13:19.899 "strip_size_kb": 64, 00:13:19.899 "state": "configuring", 00:13:19.900 "raid_level": "raid0", 00:13:19.900 "superblock": true, 00:13:19.900 "num_base_bdevs": 4, 00:13:19.900 "num_base_bdevs_discovered": 2, 00:13:19.900 "num_base_bdevs_operational": 4, 00:13:19.900 "base_bdevs_list": [ 00:13:19.900 { 00:13:19.900 "name": "BaseBdev1", 00:13:19.900 "uuid": "c66d79f8-4225-11ef-aa83-81fbc7dfef58", 00:13:19.900 "is_configured": true, 00:13:19.900 "data_offset": 2048, 00:13:19.900 "data_size": 63488 00:13:19.900 }, 00:13:19.900 { 00:13:19.900 "name": "BaseBdev2", 00:13:19.900 "uuid": "c7d28b0e-4225-11ef-aa83-81fbc7dfef58", 00:13:19.900 "is_configured": true, 00:13:19.900 "data_offset": 2048, 00:13:19.900 "data_size": 63488 00:13:19.900 }, 00:13:19.900 { 00:13:19.900 "name": "BaseBdev3", 00:13:19.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.900 "is_configured": false, 00:13:19.900 "data_offset": 0, 00:13:19.900 "data_size": 0 00:13:19.900 }, 00:13:19.900 { 00:13:19.900 "name": "BaseBdev4", 00:13:19.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:19.900 "is_configured": false, 00:13:19.900 "data_offset": 0, 00:13:19.900 "data_size": 0 00:13:19.900 } 00:13:19.900 ] 00:13:19.900 }' 00:13:19.900 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:19.900 21:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:20.158 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:20.417 [2024-07-14 21:12:31.894292] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:20.417 BaseBdev3 00:13:20.417 21:12:31 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:13:20.417 21:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:20.417 21:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:20.417 21:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:20.417 21:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:20.417 21:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:20.417 21:12:31 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:20.676 21:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:20.934 [ 00:13:20.934 { 00:13:20.934 "name": "BaseBdev3", 00:13:20.934 "aliases": [ 00:13:20.934 "c8a03bbb-4225-11ef-aa83-81fbc7dfef58" 00:13:20.934 ], 00:13:20.934 "product_name": "Malloc disk", 00:13:20.934 "block_size": 512, 00:13:20.934 "num_blocks": 65536, 00:13:20.934 "uuid": "c8a03bbb-4225-11ef-aa83-81fbc7dfef58", 00:13:20.934 "assigned_rate_limits": { 00:13:20.934 "rw_ios_per_sec": 0, 00:13:20.934 "rw_mbytes_per_sec": 0, 00:13:20.934 "r_mbytes_per_sec": 0, 00:13:20.934 "w_mbytes_per_sec": 0 00:13:20.934 }, 00:13:20.934 "claimed": true, 00:13:20.934 "claim_type": "exclusive_write", 00:13:20.934 "zoned": false, 00:13:20.934 "supported_io_types": { 00:13:20.934 "read": true, 00:13:20.934 "write": true, 00:13:20.934 "unmap": true, 00:13:20.934 "flush": true, 00:13:20.934 "reset": true, 00:13:20.934 "nvme_admin": false, 00:13:20.934 "nvme_io": false, 00:13:20.934 "nvme_io_md": false, 00:13:20.934 "write_zeroes": true, 00:13:20.934 "zcopy": true, 00:13:20.934 "get_zone_info": false, 00:13:20.934 "zone_management": false, 00:13:20.934 "zone_append": false, 00:13:20.934 "compare": false, 00:13:20.934 "compare_and_write": false, 00:13:20.934 "abort": true, 00:13:20.934 "seek_hole": false, 00:13:20.934 "seek_data": false, 00:13:20.934 "copy": true, 00:13:20.934 "nvme_iov_md": false 00:13:20.934 }, 00:13:20.934 "memory_domains": [ 00:13:20.934 { 00:13:20.934 "dma_device_id": "system", 00:13:20.934 "dma_device_type": 1 00:13:20.934 }, 00:13:20.934 { 00:13:20.934 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:20.934 "dma_device_type": 2 00:13:20.934 } 00:13:20.934 ], 00:13:20.934 "driver_specific": {} 00:13:20.934 } 00:13:20.934 ] 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:20.934 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:20.935 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:21.193 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:21.193 "name": "Existed_Raid", 00:13:21.193 "uuid": "c7525df0-4225-11ef-aa83-81fbc7dfef58", 00:13:21.193 "strip_size_kb": 64, 00:13:21.193 "state": "configuring", 00:13:21.193 "raid_level": "raid0", 00:13:21.193 "superblock": true, 00:13:21.193 "num_base_bdevs": 4, 00:13:21.193 "num_base_bdevs_discovered": 3, 00:13:21.193 "num_base_bdevs_operational": 4, 00:13:21.193 "base_bdevs_list": [ 00:13:21.193 { 00:13:21.193 "name": "BaseBdev1", 00:13:21.193 "uuid": "c66d79f8-4225-11ef-aa83-81fbc7dfef58", 00:13:21.193 "is_configured": true, 00:13:21.193 "data_offset": 2048, 00:13:21.193 "data_size": 63488 00:13:21.193 }, 00:13:21.193 { 00:13:21.193 "name": "BaseBdev2", 00:13:21.193 "uuid": "c7d28b0e-4225-11ef-aa83-81fbc7dfef58", 00:13:21.193 "is_configured": true, 00:13:21.193 "data_offset": 2048, 00:13:21.193 "data_size": 63488 00:13:21.193 }, 00:13:21.193 { 00:13:21.193 "name": "BaseBdev3", 00:13:21.193 "uuid": "c8a03bbb-4225-11ef-aa83-81fbc7dfef58", 00:13:21.193 "is_configured": true, 00:13:21.193 "data_offset": 2048, 00:13:21.193 "data_size": 63488 00:13:21.193 }, 00:13:21.193 { 00:13:21.193 "name": "BaseBdev4", 00:13:21.193 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:21.193 "is_configured": false, 00:13:21.193 "data_offset": 0, 00:13:21.193 "data_size": 0 00:13:21.193 } 00:13:21.193 ] 00:13:21.193 }' 00:13:21.193 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:21.193 21:12:32 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:21.451 21:12:32 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:21.710 [2024-07-14 21:12:33.138279] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:21.710 [2024-07-14 21:12:33.138327] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x341b58434a00 00:13:21.710 [2024-07-14 21:12:33.138333] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:21.710 [2024-07-14 
21:12:33.138351] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x341b58497e20 00:13:21.710 [2024-07-14 21:12:33.138419] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x341b58434a00 00:13:21.710 [2024-07-14 21:12:33.138423] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x341b58434a00 00:13:21.710 [2024-07-14 21:12:33.138443] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:21.710 BaseBdev4 00:13:21.710 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:13:21.710 21:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:21.710 21:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:21.710 21:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:21.710 21:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:21.710 21:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:21.710 21:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:21.970 21:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:22.227 [ 00:13:22.227 { 00:13:22.227 "name": "BaseBdev4", 00:13:22.227 "aliases": [ 00:13:22.227 "c95e0dd7-4225-11ef-aa83-81fbc7dfef58" 00:13:22.227 ], 00:13:22.227 "product_name": "Malloc disk", 00:13:22.227 "block_size": 512, 00:13:22.227 "num_blocks": 65536, 00:13:22.227 "uuid": "c95e0dd7-4225-11ef-aa83-81fbc7dfef58", 00:13:22.227 "assigned_rate_limits": { 00:13:22.227 "rw_ios_per_sec": 0, 00:13:22.227 "rw_mbytes_per_sec": 0, 00:13:22.227 "r_mbytes_per_sec": 0, 00:13:22.227 "w_mbytes_per_sec": 0 00:13:22.227 }, 00:13:22.227 "claimed": true, 00:13:22.227 "claim_type": "exclusive_write", 00:13:22.227 "zoned": false, 00:13:22.227 "supported_io_types": { 00:13:22.227 "read": true, 00:13:22.227 "write": true, 00:13:22.227 "unmap": true, 00:13:22.227 "flush": true, 00:13:22.227 "reset": true, 00:13:22.227 "nvme_admin": false, 00:13:22.227 "nvme_io": false, 00:13:22.227 "nvme_io_md": false, 00:13:22.227 "write_zeroes": true, 00:13:22.227 "zcopy": true, 00:13:22.227 "get_zone_info": false, 00:13:22.227 "zone_management": false, 00:13:22.227 "zone_append": false, 00:13:22.227 "compare": false, 00:13:22.227 "compare_and_write": false, 00:13:22.227 "abort": true, 00:13:22.227 "seek_hole": false, 00:13:22.227 "seek_data": false, 00:13:22.227 "copy": true, 00:13:22.227 "nvme_iov_md": false 00:13:22.227 }, 00:13:22.227 "memory_domains": [ 00:13:22.227 { 00:13:22.227 "dma_device_id": "system", 00:13:22.227 "dma_device_type": 1 00:13:22.227 }, 00:13:22.227 { 00:13:22.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:22.227 "dma_device_type": 2 00:13:22.227 } 00:13:22.227 ], 00:13:22.227 "driver_specific": {} 00:13:22.227 } 00:13:22.227 ] 00:13:22.227 21:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:22.228 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:22.485 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:22.485 "name": "Existed_Raid", 00:13:22.485 "uuid": "c7525df0-4225-11ef-aa83-81fbc7dfef58", 00:13:22.485 "strip_size_kb": 64, 00:13:22.485 "state": "online", 00:13:22.485 "raid_level": "raid0", 00:13:22.485 "superblock": true, 00:13:22.485 "num_base_bdevs": 4, 00:13:22.485 "num_base_bdevs_discovered": 4, 00:13:22.485 "num_base_bdevs_operational": 4, 00:13:22.485 "base_bdevs_list": [ 00:13:22.485 { 00:13:22.485 "name": "BaseBdev1", 00:13:22.485 "uuid": "c66d79f8-4225-11ef-aa83-81fbc7dfef58", 00:13:22.485 "is_configured": true, 00:13:22.485 "data_offset": 2048, 00:13:22.485 "data_size": 63488 00:13:22.485 }, 00:13:22.485 { 00:13:22.485 "name": "BaseBdev2", 00:13:22.486 "uuid": "c7d28b0e-4225-11ef-aa83-81fbc7dfef58", 00:13:22.486 "is_configured": true, 00:13:22.486 "data_offset": 2048, 00:13:22.486 "data_size": 63488 00:13:22.486 }, 00:13:22.486 { 00:13:22.486 "name": "BaseBdev3", 00:13:22.486 "uuid": "c8a03bbb-4225-11ef-aa83-81fbc7dfef58", 00:13:22.486 "is_configured": true, 00:13:22.486 "data_offset": 2048, 00:13:22.486 "data_size": 63488 00:13:22.486 }, 00:13:22.486 { 00:13:22.486 "name": "BaseBdev4", 00:13:22.486 "uuid": "c95e0dd7-4225-11ef-aa83-81fbc7dfef58", 00:13:22.486 "is_configured": true, 00:13:22.486 "data_offset": 2048, 00:13:22.486 "data_size": 63488 00:13:22.486 } 00:13:22.486 ] 00:13:22.486 }' 00:13:22.486 21:12:33 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:22.486 21:12:33 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:22.743 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:13:22.744 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:22.744 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:13:22.744 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:22.744 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:22.744 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:22.744 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:22.744 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:23.001 [2024-07-14 21:12:34.410279] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:23.002 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:23.002 "name": "Existed_Raid", 00:13:23.002 "aliases": [ 00:13:23.002 "c7525df0-4225-11ef-aa83-81fbc7dfef58" 00:13:23.002 ], 00:13:23.002 "product_name": "Raid Volume", 00:13:23.002 "block_size": 512, 00:13:23.002 "num_blocks": 253952, 00:13:23.002 "uuid": "c7525df0-4225-11ef-aa83-81fbc7dfef58", 00:13:23.002 "assigned_rate_limits": { 00:13:23.002 "rw_ios_per_sec": 0, 00:13:23.002 "rw_mbytes_per_sec": 0, 00:13:23.002 "r_mbytes_per_sec": 0, 00:13:23.002 "w_mbytes_per_sec": 0 00:13:23.002 }, 00:13:23.002 "claimed": false, 00:13:23.002 "zoned": false, 00:13:23.002 "supported_io_types": { 00:13:23.002 "read": true, 00:13:23.002 "write": true, 00:13:23.002 "unmap": true, 00:13:23.002 "flush": true, 00:13:23.002 "reset": true, 00:13:23.002 "nvme_admin": false, 00:13:23.002 "nvme_io": false, 00:13:23.002 "nvme_io_md": false, 00:13:23.002 "write_zeroes": true, 00:13:23.002 "zcopy": false, 00:13:23.002 "get_zone_info": false, 00:13:23.002 "zone_management": false, 00:13:23.002 "zone_append": false, 00:13:23.002 "compare": false, 00:13:23.002 "compare_and_write": false, 00:13:23.002 "abort": false, 00:13:23.002 "seek_hole": false, 00:13:23.002 "seek_data": false, 00:13:23.002 "copy": false, 00:13:23.002 "nvme_iov_md": false 00:13:23.002 }, 00:13:23.002 "memory_domains": [ 00:13:23.002 { 00:13:23.002 "dma_device_id": "system", 00:13:23.002 "dma_device_type": 1 00:13:23.002 }, 00:13:23.002 { 00:13:23.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.002 "dma_device_type": 2 00:13:23.002 }, 00:13:23.002 { 00:13:23.002 "dma_device_id": "system", 00:13:23.002 "dma_device_type": 1 00:13:23.002 }, 00:13:23.002 { 00:13:23.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.002 "dma_device_type": 2 00:13:23.002 }, 00:13:23.002 { 00:13:23.002 "dma_device_id": "system", 00:13:23.002 "dma_device_type": 1 00:13:23.002 }, 00:13:23.002 { 00:13:23.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.002 "dma_device_type": 2 00:13:23.002 }, 00:13:23.002 { 00:13:23.002 "dma_device_id": "system", 00:13:23.002 "dma_device_type": 1 00:13:23.002 }, 00:13:23.002 { 00:13:23.002 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.002 "dma_device_type": 2 00:13:23.002 } 00:13:23.002 ], 00:13:23.002 "driver_specific": { 00:13:23.002 "raid": { 00:13:23.002 "uuid": "c7525df0-4225-11ef-aa83-81fbc7dfef58", 00:13:23.002 "strip_size_kb": 64, 00:13:23.002 "state": "online", 00:13:23.002 "raid_level": "raid0", 00:13:23.002 "superblock": true, 00:13:23.002 "num_base_bdevs": 4, 00:13:23.002 "num_base_bdevs_discovered": 4, 00:13:23.002 "num_base_bdevs_operational": 4, 00:13:23.002 "base_bdevs_list": [ 00:13:23.002 { 00:13:23.002 "name": "BaseBdev1", 00:13:23.002 "uuid": 
"c66d79f8-4225-11ef-aa83-81fbc7dfef58", 00:13:23.002 "is_configured": true, 00:13:23.002 "data_offset": 2048, 00:13:23.002 "data_size": 63488 00:13:23.002 }, 00:13:23.002 { 00:13:23.002 "name": "BaseBdev2", 00:13:23.002 "uuid": "c7d28b0e-4225-11ef-aa83-81fbc7dfef58", 00:13:23.002 "is_configured": true, 00:13:23.002 "data_offset": 2048, 00:13:23.002 "data_size": 63488 00:13:23.002 }, 00:13:23.002 { 00:13:23.002 "name": "BaseBdev3", 00:13:23.002 "uuid": "c8a03bbb-4225-11ef-aa83-81fbc7dfef58", 00:13:23.002 "is_configured": true, 00:13:23.002 "data_offset": 2048, 00:13:23.002 "data_size": 63488 00:13:23.002 }, 00:13:23.002 { 00:13:23.002 "name": "BaseBdev4", 00:13:23.002 "uuid": "c95e0dd7-4225-11ef-aa83-81fbc7dfef58", 00:13:23.002 "is_configured": true, 00:13:23.002 "data_offset": 2048, 00:13:23.002 "data_size": 63488 00:13:23.002 } 00:13:23.002 ] 00:13:23.002 } 00:13:23.002 } 00:13:23.002 }' 00:13:23.002 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:23.002 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:13:23.002 BaseBdev2 00:13:23.002 BaseBdev3 00:13:23.002 BaseBdev4' 00:13:23.002 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:23.002 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:23.002 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:23.261 "name": "BaseBdev1", 00:13:23.261 "aliases": [ 00:13:23.261 "c66d79f8-4225-11ef-aa83-81fbc7dfef58" 00:13:23.261 ], 00:13:23.261 "product_name": "Malloc disk", 00:13:23.261 "block_size": 512, 00:13:23.261 "num_blocks": 65536, 00:13:23.261 "uuid": "c66d79f8-4225-11ef-aa83-81fbc7dfef58", 00:13:23.261 "assigned_rate_limits": { 00:13:23.261 "rw_ios_per_sec": 0, 00:13:23.261 "rw_mbytes_per_sec": 0, 00:13:23.261 "r_mbytes_per_sec": 0, 00:13:23.261 "w_mbytes_per_sec": 0 00:13:23.261 }, 00:13:23.261 "claimed": true, 00:13:23.261 "claim_type": "exclusive_write", 00:13:23.261 "zoned": false, 00:13:23.261 "supported_io_types": { 00:13:23.261 "read": true, 00:13:23.261 "write": true, 00:13:23.261 "unmap": true, 00:13:23.261 "flush": true, 00:13:23.261 "reset": true, 00:13:23.261 "nvme_admin": false, 00:13:23.261 "nvme_io": false, 00:13:23.261 "nvme_io_md": false, 00:13:23.261 "write_zeroes": true, 00:13:23.261 "zcopy": true, 00:13:23.261 "get_zone_info": false, 00:13:23.261 "zone_management": false, 00:13:23.261 "zone_append": false, 00:13:23.261 "compare": false, 00:13:23.261 "compare_and_write": false, 00:13:23.261 "abort": true, 00:13:23.261 "seek_hole": false, 00:13:23.261 "seek_data": false, 00:13:23.261 "copy": true, 00:13:23.261 "nvme_iov_md": false 00:13:23.261 }, 00:13:23.261 "memory_domains": [ 00:13:23.261 { 00:13:23.261 "dma_device_id": "system", 00:13:23.261 "dma_device_type": 1 00:13:23.261 }, 00:13:23.261 { 00:13:23.261 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.261 "dma_device_type": 2 00:13:23.261 } 00:13:23.261 ], 00:13:23.261 "driver_specific": {} 00:13:23.261 }' 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:23.261 21:12:34 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:23.261 21:12:34 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:23.519 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:23.519 "name": "BaseBdev2", 00:13:23.519 "aliases": [ 00:13:23.519 "c7d28b0e-4225-11ef-aa83-81fbc7dfef58" 00:13:23.519 ], 00:13:23.519 "product_name": "Malloc disk", 00:13:23.519 "block_size": 512, 00:13:23.519 "num_blocks": 65536, 00:13:23.519 "uuid": "c7d28b0e-4225-11ef-aa83-81fbc7dfef58", 00:13:23.519 "assigned_rate_limits": { 00:13:23.519 "rw_ios_per_sec": 0, 00:13:23.519 "rw_mbytes_per_sec": 0, 00:13:23.519 "r_mbytes_per_sec": 0, 00:13:23.519 "w_mbytes_per_sec": 0 00:13:23.519 }, 00:13:23.519 "claimed": true, 00:13:23.519 "claim_type": "exclusive_write", 00:13:23.519 "zoned": false, 00:13:23.519 "supported_io_types": { 00:13:23.519 "read": true, 00:13:23.519 "write": true, 00:13:23.519 "unmap": true, 00:13:23.519 "flush": true, 00:13:23.519 "reset": true, 00:13:23.519 "nvme_admin": false, 00:13:23.519 "nvme_io": false, 00:13:23.519 "nvme_io_md": false, 00:13:23.519 "write_zeroes": true, 00:13:23.519 "zcopy": true, 00:13:23.519 "get_zone_info": false, 00:13:23.519 "zone_management": false, 00:13:23.519 "zone_append": false, 00:13:23.519 "compare": false, 00:13:23.519 "compare_and_write": false, 00:13:23.519 "abort": true, 00:13:23.519 "seek_hole": false, 00:13:23.519 "seek_data": false, 00:13:23.519 "copy": true, 00:13:23.519 "nvme_iov_md": false 00:13:23.519 }, 00:13:23.519 "memory_domains": [ 00:13:23.519 { 00:13:23.519 "dma_device_id": "system", 00:13:23.519 "dma_device_type": 1 00:13:23.519 }, 00:13:23.519 { 00:13:23.519 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:23.519 "dma_device_type": 2 00:13:23.519 } 00:13:23.519 ], 00:13:23.519 "driver_specific": {} 00:13:23.519 }' 00:13:23.519 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:23.520 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:23.520 21:12:35 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:23.520 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:23.778 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:23.778 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:23.778 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:23.778 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:23.778 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:23.778 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:23.778 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:23.778 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:23.778 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:23.778 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:23.778 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:24.037 "name": "BaseBdev3", 00:13:24.037 "aliases": [ 00:13:24.037 "c8a03bbb-4225-11ef-aa83-81fbc7dfef58" 00:13:24.037 ], 00:13:24.037 "product_name": "Malloc disk", 00:13:24.037 "block_size": 512, 00:13:24.037 "num_blocks": 65536, 00:13:24.037 "uuid": "c8a03bbb-4225-11ef-aa83-81fbc7dfef58", 00:13:24.037 "assigned_rate_limits": { 00:13:24.037 "rw_ios_per_sec": 0, 00:13:24.037 "rw_mbytes_per_sec": 0, 00:13:24.037 "r_mbytes_per_sec": 0, 00:13:24.037 "w_mbytes_per_sec": 0 00:13:24.037 }, 00:13:24.037 "claimed": true, 00:13:24.037 "claim_type": "exclusive_write", 00:13:24.037 "zoned": false, 00:13:24.037 "supported_io_types": { 00:13:24.037 "read": true, 00:13:24.037 "write": true, 00:13:24.037 "unmap": true, 00:13:24.037 "flush": true, 00:13:24.037 "reset": true, 00:13:24.037 "nvme_admin": false, 00:13:24.037 "nvme_io": false, 00:13:24.037 "nvme_io_md": false, 00:13:24.037 "write_zeroes": true, 00:13:24.037 "zcopy": true, 00:13:24.037 "get_zone_info": false, 00:13:24.037 "zone_management": false, 00:13:24.037 "zone_append": false, 00:13:24.037 "compare": false, 00:13:24.037 "compare_and_write": false, 00:13:24.037 "abort": true, 00:13:24.037 "seek_hole": false, 00:13:24.037 "seek_data": false, 00:13:24.037 "copy": true, 00:13:24.037 "nvme_iov_md": false 00:13:24.037 }, 00:13:24.037 "memory_domains": [ 00:13:24.037 { 00:13:24.037 "dma_device_id": "system", 00:13:24.037 "dma_device_type": 1 00:13:24.037 }, 00:13:24.037 { 00:13:24.037 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.037 "dma_device_type": 2 00:13:24.037 } 00:13:24.037 ], 00:13:24.037 "driver_specific": {} 00:13:24.037 }' 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
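[Note: the xtrace above is the per-base-bdev verification loop at bdev_raid.sh lines 203-208. For every name extracted from base_bdevs_list it fetches the bdev's JSON and asserts the geometry the raid was built with. A sketch of the pattern the trace records, not the verbatim script; the rpc.py path and the -s /var/tmp/spdk-raid.sock socket are taken from the log:
  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for name in $base_bdev_names; do
    # fetch the bdev's JSON and unwrap the single-element array
    info=$($rpc -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b "$name" | jq '.[]')
    [[ $(jq .block_size <<< "$info") == 512 ]]     # 512-byte data blocks
    [[ $(jq .md_size <<< "$info") == null ]]       # no separate metadata
    [[ $(jq .md_interleave <<< "$info") == null ]] # no interleaved metadata
    [[ $(jq .dif_type <<< "$info") == null ]]      # no DIF protection
  done
end note]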
00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:24.037 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:24.296 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:24.296 "name": "BaseBdev4", 00:13:24.296 "aliases": [ 00:13:24.296 "c95e0dd7-4225-11ef-aa83-81fbc7dfef58" 00:13:24.296 ], 00:13:24.296 "product_name": "Malloc disk", 00:13:24.296 "block_size": 512, 00:13:24.296 "num_blocks": 65536, 00:13:24.296 "uuid": "c95e0dd7-4225-11ef-aa83-81fbc7dfef58", 00:13:24.296 "assigned_rate_limits": { 00:13:24.296 "rw_ios_per_sec": 0, 00:13:24.296 "rw_mbytes_per_sec": 0, 00:13:24.296 "r_mbytes_per_sec": 0, 00:13:24.296 "w_mbytes_per_sec": 0 00:13:24.296 }, 00:13:24.296 "claimed": true, 00:13:24.296 "claim_type": "exclusive_write", 00:13:24.296 "zoned": false, 00:13:24.296 "supported_io_types": { 00:13:24.296 "read": true, 00:13:24.296 "write": true, 00:13:24.296 "unmap": true, 00:13:24.296 "flush": true, 00:13:24.296 "reset": true, 00:13:24.296 "nvme_admin": false, 00:13:24.296 "nvme_io": false, 00:13:24.296 "nvme_io_md": false, 00:13:24.296 "write_zeroes": true, 00:13:24.296 "zcopy": true, 00:13:24.296 "get_zone_info": false, 00:13:24.296 "zone_management": false, 00:13:24.296 "zone_append": false, 00:13:24.296 "compare": false, 00:13:24.296 "compare_and_write": false, 00:13:24.296 "abort": true, 00:13:24.296 "seek_hole": false, 00:13:24.296 "seek_data": false, 00:13:24.296 "copy": true, 00:13:24.296 "nvme_iov_md": false 00:13:24.296 }, 00:13:24.296 "memory_domains": [ 00:13:24.296 { 00:13:24.296 "dma_device_id": "system", 00:13:24.296 "dma_device_type": 1 00:13:24.296 }, 00:13:24.296 { 00:13:24.296 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:24.296 "dma_device_type": 2 00:13:24.296 } 00:13:24.296 ], 00:13:24.296 "driver_specific": {} 00:13:24.296 }' 00:13:24.296 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:24.296 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:24.296 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:24.296 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:24.296 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:24.296 21:12:35 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:24.296 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:24.296 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:24.296 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:24.296 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:24.296 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:24.296 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:24.296 21:12:35 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:24.554 [2024-07-14 21:12:36.070275] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:24.554 [2024-07-14 21:12:36.070293] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:24.554 [2024-07-14 21:12:36.070326] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid0 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline raid0 64 3 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:24.554 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:25.121 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:25.121 "name": "Existed_Raid", 00:13:25.121 "uuid": "c7525df0-4225-11ef-aa83-81fbc7dfef58", 00:13:25.121 "strip_size_kb": 64, 
00:13:25.121 "state": "offline", 00:13:25.121 "raid_level": "raid0", 00:13:25.121 "superblock": true, 00:13:25.121 "num_base_bdevs": 4, 00:13:25.121 "num_base_bdevs_discovered": 3, 00:13:25.121 "num_base_bdevs_operational": 3, 00:13:25.121 "base_bdevs_list": [ 00:13:25.121 { 00:13:25.121 "name": null, 00:13:25.121 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:25.121 "is_configured": false, 00:13:25.121 "data_offset": 2048, 00:13:25.121 "data_size": 63488 00:13:25.121 }, 00:13:25.121 { 00:13:25.121 "name": "BaseBdev2", 00:13:25.121 "uuid": "c7d28b0e-4225-11ef-aa83-81fbc7dfef58", 00:13:25.121 "is_configured": true, 00:13:25.121 "data_offset": 2048, 00:13:25.121 "data_size": 63488 00:13:25.121 }, 00:13:25.121 { 00:13:25.121 "name": "BaseBdev3", 00:13:25.121 "uuid": "c8a03bbb-4225-11ef-aa83-81fbc7dfef58", 00:13:25.121 "is_configured": true, 00:13:25.121 "data_offset": 2048, 00:13:25.121 "data_size": 63488 00:13:25.121 }, 00:13:25.121 { 00:13:25.121 "name": "BaseBdev4", 00:13:25.121 "uuid": "c95e0dd7-4225-11ef-aa83-81fbc7dfef58", 00:13:25.121 "is_configured": true, 00:13:25.121 "data_offset": 2048, 00:13:25.121 "data_size": 63488 00:13:25.121 } 00:13:25.121 ] 00:13:25.121 }' 00:13:25.121 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:25.121 21:12:36 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:25.121 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:13:25.121 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:25.121 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.121 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:25.381 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:25.381 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.381 21:12:36 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:13:25.673 [2024-07-14 21:12:37.050513] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:25.673 21:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:25.673 21:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:25.673 21:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:25.673 21:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:25.998 21:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:25.998 21:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:25.998 21:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:13:26.257 [2024-07-14 21:12:37.595042] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:26.257 21:12:37 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:26.257 21:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:26.257 21:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.257 21:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:13:26.516 21:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:13:26.516 21:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:13:26.516 21:12:37 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:13:26.516 [2024-07-14 21:12:38.023479] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:13:26.516 [2024-07-14 21:12:38.023498] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x341b58434a00 name Existed_Raid, state offline 00:13:26.516 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:13:26.516 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:13:26.516 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:26.516 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:13:26.774 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:13:26.774 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:13:26.774 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:13:26.774 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:13:26.774 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:26.774 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:13:27.033 BaseBdev2 00:13:27.033 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:13:27.033 21:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:13:27.033 21:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:27.033 21:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:27.033 21:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:27.033 21:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:27.033 21:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:27.291 21:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:13:27.549 [ 
00:13:27.549 { 00:13:27.549 "name": "BaseBdev2", 00:13:27.549 "aliases": [ 00:13:27.549 "cc916d61-4225-11ef-aa83-81fbc7dfef58" 00:13:27.549 ], 00:13:27.549 "product_name": "Malloc disk", 00:13:27.549 "block_size": 512, 00:13:27.549 "num_blocks": 65536, 00:13:27.549 "uuid": "cc916d61-4225-11ef-aa83-81fbc7dfef58", 00:13:27.549 "assigned_rate_limits": { 00:13:27.549 "rw_ios_per_sec": 0, 00:13:27.549 "rw_mbytes_per_sec": 0, 00:13:27.549 "r_mbytes_per_sec": 0, 00:13:27.549 "w_mbytes_per_sec": 0 00:13:27.549 }, 00:13:27.549 "claimed": false, 00:13:27.549 "zoned": false, 00:13:27.549 "supported_io_types": { 00:13:27.549 "read": true, 00:13:27.549 "write": true, 00:13:27.549 "unmap": true, 00:13:27.549 "flush": true, 00:13:27.549 "reset": true, 00:13:27.549 "nvme_admin": false, 00:13:27.549 "nvme_io": false, 00:13:27.549 "nvme_io_md": false, 00:13:27.549 "write_zeroes": true, 00:13:27.549 "zcopy": true, 00:13:27.549 "get_zone_info": false, 00:13:27.549 "zone_management": false, 00:13:27.549 "zone_append": false, 00:13:27.549 "compare": false, 00:13:27.549 "compare_and_write": false, 00:13:27.549 "abort": true, 00:13:27.549 "seek_hole": false, 00:13:27.549 "seek_data": false, 00:13:27.549 "copy": true, 00:13:27.549 "nvme_iov_md": false 00:13:27.549 }, 00:13:27.549 "memory_domains": [ 00:13:27.549 { 00:13:27.549 "dma_device_id": "system", 00:13:27.549 "dma_device_type": 1 00:13:27.549 }, 00:13:27.549 { 00:13:27.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:27.549 "dma_device_type": 2 00:13:27.549 } 00:13:27.549 ], 00:13:27.549 "driver_specific": {} 00:13:27.549 } 00:13:27.549 ] 00:13:27.549 21:12:38 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:27.549 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:27.549 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:27.549 21:12:38 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:13:27.808 BaseBdev3 00:13:27.808 21:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:13:27.808 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:13:27.808 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:27.808 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:27.808 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:27.808 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:27.808 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:28.066 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:13:28.066 [ 00:13:28.066 { 00:13:28.066 "name": "BaseBdev3", 00:13:28.066 "aliases": [ 00:13:28.066 "ccf9cc94-4225-11ef-aa83-81fbc7dfef58" 00:13:28.066 ], 00:13:28.066 "product_name": "Malloc disk", 00:13:28.066 "block_size": 512, 00:13:28.066 "num_blocks": 65536, 00:13:28.066 "uuid": 
"ccf9cc94-4225-11ef-aa83-81fbc7dfef58", 00:13:28.066 "assigned_rate_limits": { 00:13:28.066 "rw_ios_per_sec": 0, 00:13:28.066 "rw_mbytes_per_sec": 0, 00:13:28.066 "r_mbytes_per_sec": 0, 00:13:28.066 "w_mbytes_per_sec": 0 00:13:28.066 }, 00:13:28.066 "claimed": false, 00:13:28.066 "zoned": false, 00:13:28.066 "supported_io_types": { 00:13:28.066 "read": true, 00:13:28.066 "write": true, 00:13:28.066 "unmap": true, 00:13:28.066 "flush": true, 00:13:28.066 "reset": true, 00:13:28.066 "nvme_admin": false, 00:13:28.066 "nvme_io": false, 00:13:28.066 "nvme_io_md": false, 00:13:28.066 "write_zeroes": true, 00:13:28.066 "zcopy": true, 00:13:28.066 "get_zone_info": false, 00:13:28.066 "zone_management": false, 00:13:28.066 "zone_append": false, 00:13:28.066 "compare": false, 00:13:28.066 "compare_and_write": false, 00:13:28.066 "abort": true, 00:13:28.066 "seek_hole": false, 00:13:28.066 "seek_data": false, 00:13:28.066 "copy": true, 00:13:28.066 "nvme_iov_md": false 00:13:28.066 }, 00:13:28.066 "memory_domains": [ 00:13:28.066 { 00:13:28.066 "dma_device_id": "system", 00:13:28.066 "dma_device_type": 1 00:13:28.066 }, 00:13:28.066 { 00:13:28.066 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.066 "dma_device_type": 2 00:13:28.066 } 00:13:28.066 ], 00:13:28.066 "driver_specific": {} 00:13:28.066 } 00:13:28.066 ] 00:13:28.325 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:28.325 21:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:28.325 21:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:28.325 21:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:13:28.325 BaseBdev4 00:13:28.583 21:12:39 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:13:28.583 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:13:28.584 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:28.584 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:28.584 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:28.584 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:28.584 21:12:39 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:28.843 21:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:13:28.843 [ 00:13:28.843 { 00:13:28.843 "name": "BaseBdev4", 00:13:28.843 "aliases": [ 00:13:28.843 "cd5fbad4-4225-11ef-aa83-81fbc7dfef58" 00:13:28.843 ], 00:13:28.843 "product_name": "Malloc disk", 00:13:28.843 "block_size": 512, 00:13:28.843 "num_blocks": 65536, 00:13:28.843 "uuid": "cd5fbad4-4225-11ef-aa83-81fbc7dfef58", 00:13:28.843 "assigned_rate_limits": { 00:13:28.843 "rw_ios_per_sec": 0, 00:13:28.843 "rw_mbytes_per_sec": 0, 00:13:28.843 "r_mbytes_per_sec": 0, 00:13:28.843 "w_mbytes_per_sec": 0 00:13:28.843 }, 00:13:28.843 "claimed": false, 00:13:28.843 "zoned": false, 00:13:28.843 
"supported_io_types": { 00:13:28.843 "read": true, 00:13:28.843 "write": true, 00:13:28.843 "unmap": true, 00:13:28.843 "flush": true, 00:13:28.843 "reset": true, 00:13:28.843 "nvme_admin": false, 00:13:28.843 "nvme_io": false, 00:13:28.843 "nvme_io_md": false, 00:13:28.843 "write_zeroes": true, 00:13:28.843 "zcopy": true, 00:13:28.843 "get_zone_info": false, 00:13:28.843 "zone_management": false, 00:13:28.843 "zone_append": false, 00:13:28.843 "compare": false, 00:13:28.843 "compare_and_write": false, 00:13:28.843 "abort": true, 00:13:28.843 "seek_hole": false, 00:13:28.843 "seek_data": false, 00:13:28.843 "copy": true, 00:13:28.843 "nvme_iov_md": false 00:13:28.843 }, 00:13:28.843 "memory_domains": [ 00:13:28.843 { 00:13:28.843 "dma_device_id": "system", 00:13:28.843 "dma_device_type": 1 00:13:28.843 }, 00:13:28.843 { 00:13:28.843 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:28.843 "dma_device_type": 2 00:13:28.843 } 00:13:28.843 ], 00:13:28.843 "driver_specific": {} 00:13:28.843 } 00:13:28.843 ] 00:13:28.843 21:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:28.843 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:13:28.843 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:13:28.843 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:13:29.102 [2024-07-14 21:12:40.596110] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:13:29.102 [2024-07-14 21:12:40.596167] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:13:29.102 [2024-07-14 21:12:40.596174] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:29.102 [2024-07-14 21:12:40.596539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:29.102 [2024-07-14 21:12:40.596548] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:29.102 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:29.102 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:29.102 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:29.102 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:29.102 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:29.102 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:29.102 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:29.102 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:29.102 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:29.102 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:29.102 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.102 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:29.361 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:29.361 "name": "Existed_Raid", 00:13:29.361 "uuid": "cdd0096c-4225-11ef-aa83-81fbc7dfef58", 00:13:29.361 "strip_size_kb": 64, 00:13:29.361 "state": "configuring", 00:13:29.361 "raid_level": "raid0", 00:13:29.361 "superblock": true, 00:13:29.361 "num_base_bdevs": 4, 00:13:29.361 "num_base_bdevs_discovered": 3, 00:13:29.361 "num_base_bdevs_operational": 4, 00:13:29.361 "base_bdevs_list": [ 00:13:29.361 { 00:13:29.361 "name": "BaseBdev1", 00:13:29.361 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:29.361 "is_configured": false, 00:13:29.361 "data_offset": 0, 00:13:29.361 "data_size": 0 00:13:29.361 }, 00:13:29.361 { 00:13:29.361 "name": "BaseBdev2", 00:13:29.361 "uuid": "cc916d61-4225-11ef-aa83-81fbc7dfef58", 00:13:29.361 "is_configured": true, 00:13:29.361 "data_offset": 2048, 00:13:29.361 "data_size": 63488 00:13:29.361 }, 00:13:29.361 { 00:13:29.361 "name": "BaseBdev3", 00:13:29.361 "uuid": "ccf9cc94-4225-11ef-aa83-81fbc7dfef58", 00:13:29.361 "is_configured": true, 00:13:29.361 "data_offset": 2048, 00:13:29.361 "data_size": 63488 00:13:29.361 }, 00:13:29.361 { 00:13:29.361 "name": "BaseBdev4", 00:13:29.361 "uuid": "cd5fbad4-4225-11ef-aa83-81fbc7dfef58", 00:13:29.361 "is_configured": true, 00:13:29.361 "data_offset": 2048, 00:13:29.361 "data_size": 63488 00:13:29.361 } 00:13:29.361 ] 00:13:29.361 }' 00:13:29.361 21:12:40 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:29.361 21:12:40 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:29.620 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:13:29.879 [2024-07-14 21:12:41.360105] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:13:29.879 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:29.879 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:29.879 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:29.879 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:29.879 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:29.879 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:29.879 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:29.879 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:29.879 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:29.879 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:29.879 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:29.879 21:12:41 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:30.138 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:30.138 "name": "Existed_Raid", 00:13:30.138 "uuid": "cdd0096c-4225-11ef-aa83-81fbc7dfef58", 00:13:30.138 "strip_size_kb": 64, 00:13:30.138 "state": "configuring", 00:13:30.138 "raid_level": "raid0", 00:13:30.138 "superblock": true, 00:13:30.138 "num_base_bdevs": 4, 00:13:30.138 "num_base_bdevs_discovered": 2, 00:13:30.138 "num_base_bdevs_operational": 4, 00:13:30.138 "base_bdevs_list": [ 00:13:30.138 { 00:13:30.138 "name": "BaseBdev1", 00:13:30.138 "uuid": "00000000-0000-0000-0000-000000000000", 00:13:30.138 "is_configured": false, 00:13:30.138 "data_offset": 0, 00:13:30.138 "data_size": 0 00:13:30.138 }, 00:13:30.138 { 00:13:30.138 "name": null, 00:13:30.138 "uuid": "cc916d61-4225-11ef-aa83-81fbc7dfef58", 00:13:30.138 "is_configured": false, 00:13:30.138 "data_offset": 2048, 00:13:30.138 "data_size": 63488 00:13:30.138 }, 00:13:30.138 { 00:13:30.138 "name": "BaseBdev3", 00:13:30.138 "uuid": "ccf9cc94-4225-11ef-aa83-81fbc7dfef58", 00:13:30.138 "is_configured": true, 00:13:30.138 "data_offset": 2048, 00:13:30.138 "data_size": 63488 00:13:30.138 }, 00:13:30.138 { 00:13:30.138 "name": "BaseBdev4", 00:13:30.138 "uuid": "cd5fbad4-4225-11ef-aa83-81fbc7dfef58", 00:13:30.138 "is_configured": true, 00:13:30.138 "data_offset": 2048, 00:13:30.138 "data_size": 63488 00:13:30.138 } 00:13:30.138 ] 00:13:30.138 }' 00:13:30.138 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:30.138 21:12:41 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:30.705 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:30.705 21:12:41 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:30.705 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:13:30.705 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:13:30.964 [2024-07-14 21:12:42.444267] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:30.964 BaseBdev1 00:13:30.964 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:13:30.964 21:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:13:30.964 21:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:30.964 21:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:30.964 21:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:30.964 21:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:30.964 21:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:31.224 21:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:13:31.483 [ 00:13:31.483 { 00:13:31.483 "name": "BaseBdev1", 00:13:31.483 "aliases": [ 00:13:31.483 "ceea0696-4225-11ef-aa83-81fbc7dfef58" 00:13:31.483 ], 00:13:31.483 "product_name": "Malloc disk", 00:13:31.483 "block_size": 512, 00:13:31.483 "num_blocks": 65536, 00:13:31.483 "uuid": "ceea0696-4225-11ef-aa83-81fbc7dfef58", 00:13:31.483 "assigned_rate_limits": { 00:13:31.483 "rw_ios_per_sec": 0, 00:13:31.483 "rw_mbytes_per_sec": 0, 00:13:31.483 "r_mbytes_per_sec": 0, 00:13:31.483 "w_mbytes_per_sec": 0 00:13:31.483 }, 00:13:31.483 "claimed": true, 00:13:31.483 "claim_type": "exclusive_write", 00:13:31.483 "zoned": false, 00:13:31.483 "supported_io_types": { 00:13:31.483 "read": true, 00:13:31.483 "write": true, 00:13:31.483 "unmap": true, 00:13:31.483 "flush": true, 00:13:31.483 "reset": true, 00:13:31.483 "nvme_admin": false, 00:13:31.483 "nvme_io": false, 00:13:31.483 "nvme_io_md": false, 00:13:31.483 "write_zeroes": true, 00:13:31.483 "zcopy": true, 00:13:31.483 "get_zone_info": false, 00:13:31.483 "zone_management": false, 00:13:31.483 "zone_append": false, 00:13:31.483 "compare": false, 00:13:31.483 "compare_and_write": false, 00:13:31.483 "abort": true, 00:13:31.483 "seek_hole": false, 00:13:31.483 "seek_data": false, 00:13:31.483 "copy": true, 00:13:31.483 "nvme_iov_md": false 00:13:31.483 }, 00:13:31.483 "memory_domains": [ 00:13:31.483 { 00:13:31.483 "dma_device_id": "system", 00:13:31.483 "dma_device_type": 1 00:13:31.483 }, 00:13:31.483 { 00:13:31.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:31.483 "dma_device_type": 2 00:13:31.483 } 00:13:31.483 ], 00:13:31.483 "driver_specific": {} 00:13:31.483 } 00:13:31.483 ] 00:13:31.483 21:12:42 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:31.483 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:31.483 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:31.483 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:31.483 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:31.483 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:31.483 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:31.483 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:31.483 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:31.483 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:31.483 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:31.483 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.483 21:12:42 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:31.742 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:31.742 "name": "Existed_Raid", 00:13:31.742 "uuid": 
"cdd0096c-4225-11ef-aa83-81fbc7dfef58", 00:13:31.742 "strip_size_kb": 64, 00:13:31.742 "state": "configuring", 00:13:31.742 "raid_level": "raid0", 00:13:31.742 "superblock": true, 00:13:31.742 "num_base_bdevs": 4, 00:13:31.742 "num_base_bdevs_discovered": 3, 00:13:31.742 "num_base_bdevs_operational": 4, 00:13:31.742 "base_bdevs_list": [ 00:13:31.742 { 00:13:31.742 "name": "BaseBdev1", 00:13:31.742 "uuid": "ceea0696-4225-11ef-aa83-81fbc7dfef58", 00:13:31.742 "is_configured": true, 00:13:31.742 "data_offset": 2048, 00:13:31.742 "data_size": 63488 00:13:31.742 }, 00:13:31.742 { 00:13:31.742 "name": null, 00:13:31.742 "uuid": "cc916d61-4225-11ef-aa83-81fbc7dfef58", 00:13:31.742 "is_configured": false, 00:13:31.742 "data_offset": 2048, 00:13:31.742 "data_size": 63488 00:13:31.742 }, 00:13:31.742 { 00:13:31.742 "name": "BaseBdev3", 00:13:31.742 "uuid": "ccf9cc94-4225-11ef-aa83-81fbc7dfef58", 00:13:31.742 "is_configured": true, 00:13:31.742 "data_offset": 2048, 00:13:31.742 "data_size": 63488 00:13:31.742 }, 00:13:31.742 { 00:13:31.742 "name": "BaseBdev4", 00:13:31.742 "uuid": "cd5fbad4-4225-11ef-aa83-81fbc7dfef58", 00:13:31.742 "is_configured": true, 00:13:31.742 "data_offset": 2048, 00:13:31.742 "data_size": 63488 00:13:31.742 } 00:13:31.742 ] 00:13:31.742 }' 00:13:31.742 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:31.742 21:12:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:31.999 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:31.999 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:32.257 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:13:32.257 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:13:32.514 [2024-07-14 21:12:43.860148] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:13:32.514 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:32.514 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:32.514 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:32.514 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:32.514 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:32.514 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:32.514 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:32.514 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:32.514 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:32.514 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:32.514 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:32.514 21:12:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:32.772 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:32.772 "name": "Existed_Raid", 00:13:32.772 "uuid": "cdd0096c-4225-11ef-aa83-81fbc7dfef58", 00:13:32.772 "strip_size_kb": 64, 00:13:32.772 "state": "configuring", 00:13:32.772 "raid_level": "raid0", 00:13:32.772 "superblock": true, 00:13:32.772 "num_base_bdevs": 4, 00:13:32.772 "num_base_bdevs_discovered": 2, 00:13:32.772 "num_base_bdevs_operational": 4, 00:13:32.772 "base_bdevs_list": [ 00:13:32.772 { 00:13:32.772 "name": "BaseBdev1", 00:13:32.772 "uuid": "ceea0696-4225-11ef-aa83-81fbc7dfef58", 00:13:32.772 "is_configured": true, 00:13:32.772 "data_offset": 2048, 00:13:32.772 "data_size": 63488 00:13:32.772 }, 00:13:32.772 { 00:13:32.772 "name": null, 00:13:32.772 "uuid": "cc916d61-4225-11ef-aa83-81fbc7dfef58", 00:13:32.772 "is_configured": false, 00:13:32.772 "data_offset": 2048, 00:13:32.772 "data_size": 63488 00:13:32.772 }, 00:13:32.772 { 00:13:32.772 "name": null, 00:13:32.772 "uuid": "ccf9cc94-4225-11ef-aa83-81fbc7dfef58", 00:13:32.772 "is_configured": false, 00:13:32.772 "data_offset": 2048, 00:13:32.772 "data_size": 63488 00:13:32.772 }, 00:13:32.772 { 00:13:32.772 "name": "BaseBdev4", 00:13:32.772 "uuid": "cd5fbad4-4225-11ef-aa83-81fbc7dfef58", 00:13:32.772 "is_configured": true, 00:13:32.772 "data_offset": 2048, 00:13:32.772 "data_size": 63488 00:13:32.772 } 00:13:32.772 ] 00:13:32.772 }' 00:13:32.772 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:32.772 21:12:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:33.031 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.031 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:33.290 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:13:33.290 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:13:33.548 [2024-07-14 21:12:44.924181] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:33.548 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:33.548 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:33.548 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:33.548 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:33.548 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:33.548 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:33.548 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:33.548 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local 
num_base_bdevs 00:13:33.548 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:33.548 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:33.548 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:33.548 21:12:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:33.806 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:33.806 "name": "Existed_Raid", 00:13:33.806 "uuid": "cdd0096c-4225-11ef-aa83-81fbc7dfef58", 00:13:33.806 "strip_size_kb": 64, 00:13:33.806 "state": "configuring", 00:13:33.806 "raid_level": "raid0", 00:13:33.806 "superblock": true, 00:13:33.806 "num_base_bdevs": 4, 00:13:33.806 "num_base_bdevs_discovered": 3, 00:13:33.806 "num_base_bdevs_operational": 4, 00:13:33.806 "base_bdevs_list": [ 00:13:33.806 { 00:13:33.806 "name": "BaseBdev1", 00:13:33.806 "uuid": "ceea0696-4225-11ef-aa83-81fbc7dfef58", 00:13:33.806 "is_configured": true, 00:13:33.806 "data_offset": 2048, 00:13:33.806 "data_size": 63488 00:13:33.806 }, 00:13:33.806 { 00:13:33.806 "name": null, 00:13:33.806 "uuid": "cc916d61-4225-11ef-aa83-81fbc7dfef58", 00:13:33.806 "is_configured": false, 00:13:33.806 "data_offset": 2048, 00:13:33.806 "data_size": 63488 00:13:33.806 }, 00:13:33.806 { 00:13:33.806 "name": "BaseBdev3", 00:13:33.806 "uuid": "ccf9cc94-4225-11ef-aa83-81fbc7dfef58", 00:13:33.806 "is_configured": true, 00:13:33.806 "data_offset": 2048, 00:13:33.806 "data_size": 63488 00:13:33.806 }, 00:13:33.806 { 00:13:33.806 "name": "BaseBdev4", 00:13:33.806 "uuid": "cd5fbad4-4225-11ef-aa83-81fbc7dfef58", 00:13:33.806 "is_configured": true, 00:13:33.806 "data_offset": 2048, 00:13:33.806 "data_size": 63488 00:13:33.806 } 00:13:33.806 ] 00:13:33.806 }' 00:13:33.806 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:33.806 21:12:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:34.065 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.065 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:13:34.323 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:13:34.323 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:13:34.581 [2024-07-14 21:12:45.896201] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:13:34.581 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:34.581 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:34.581 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:34.581 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:34.581 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 
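[Note: verify_raid_bdev_state (bdev_raid.sh lines 116-126, traced above) re-reads the array's JSON after each mutation and compares state, raid_level, strip size, and base-bdev counts against the expected values. A sketch of the fetch it performs, assuming the same RPC socket as this run:
  /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
      bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'
Deleting the malloc-backed BaseBdev1 from the superblock array does not shrink base_bdevs_list: as the dump that follows shows, the slot remains with name null and is_configured false, num_base_bdevs_discovered drops to 2, and the state stays "configuring".
end note]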
00:13:34.581 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:34.581 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:34.581 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:34.581 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:34.581 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:34.581 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:34.581 21:12:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:34.839 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:34.839 "name": "Existed_Raid", 00:13:34.839 "uuid": "cdd0096c-4225-11ef-aa83-81fbc7dfef58", 00:13:34.839 "strip_size_kb": 64, 00:13:34.839 "state": "configuring", 00:13:34.839 "raid_level": "raid0", 00:13:34.839 "superblock": true, 00:13:34.839 "num_base_bdevs": 4, 00:13:34.839 "num_base_bdevs_discovered": 2, 00:13:34.839 "num_base_bdevs_operational": 4, 00:13:34.839 "base_bdevs_list": [ 00:13:34.839 { 00:13:34.839 "name": null, 00:13:34.840 "uuid": "ceea0696-4225-11ef-aa83-81fbc7dfef58", 00:13:34.840 "is_configured": false, 00:13:34.840 "data_offset": 2048, 00:13:34.840 "data_size": 63488 00:13:34.840 }, 00:13:34.840 { 00:13:34.840 "name": null, 00:13:34.840 "uuid": "cc916d61-4225-11ef-aa83-81fbc7dfef58", 00:13:34.840 "is_configured": false, 00:13:34.840 "data_offset": 2048, 00:13:34.840 "data_size": 63488 00:13:34.840 }, 00:13:34.840 { 00:13:34.840 "name": "BaseBdev3", 00:13:34.840 "uuid": "ccf9cc94-4225-11ef-aa83-81fbc7dfef58", 00:13:34.840 "is_configured": true, 00:13:34.840 "data_offset": 2048, 00:13:34.840 "data_size": 63488 00:13:34.840 }, 00:13:34.840 { 00:13:34.840 "name": "BaseBdev4", 00:13:34.840 "uuid": "cd5fbad4-4225-11ef-aa83-81fbc7dfef58", 00:13:34.840 "is_configured": true, 00:13:34.840 "data_offset": 2048, 00:13:34.840 "data_size": 63488 00:13:34.840 } 00:13:34.840 ] 00:13:34.840 }' 00:13:34.840 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:34.840 21:12:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:35.098 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.098 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:13:35.356 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:13:35.356 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:13:35.615 [2024-07-14 21:12:46.908422] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:35.615 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid0 64 4 00:13:35.615 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:13:35.615 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:35.615 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:35.615 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:35.615 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:35.615 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:35.615 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:35.615 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:35.615 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:35.615 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:35.615 21:12:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:35.873 21:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:35.873 "name": "Existed_Raid", 00:13:35.873 "uuid": "cdd0096c-4225-11ef-aa83-81fbc7dfef58", 00:13:35.873 "strip_size_kb": 64, 00:13:35.873 "state": "configuring", 00:13:35.873 "raid_level": "raid0", 00:13:35.873 "superblock": true, 00:13:35.873 "num_base_bdevs": 4, 00:13:35.873 "num_base_bdevs_discovered": 3, 00:13:35.873 "num_base_bdevs_operational": 4, 00:13:35.873 "base_bdevs_list": [ 00:13:35.873 { 00:13:35.873 "name": null, 00:13:35.873 "uuid": "ceea0696-4225-11ef-aa83-81fbc7dfef58", 00:13:35.873 "is_configured": false, 00:13:35.873 "data_offset": 2048, 00:13:35.873 "data_size": 63488 00:13:35.873 }, 00:13:35.873 { 00:13:35.873 "name": "BaseBdev2", 00:13:35.873 "uuid": "cc916d61-4225-11ef-aa83-81fbc7dfef58", 00:13:35.873 "is_configured": true, 00:13:35.873 "data_offset": 2048, 00:13:35.873 "data_size": 63488 00:13:35.873 }, 00:13:35.873 { 00:13:35.873 "name": "BaseBdev3", 00:13:35.873 "uuid": "ccf9cc94-4225-11ef-aa83-81fbc7dfef58", 00:13:35.873 "is_configured": true, 00:13:35.873 "data_offset": 2048, 00:13:35.873 "data_size": 63488 00:13:35.873 }, 00:13:35.873 { 00:13:35.873 "name": "BaseBdev4", 00:13:35.873 "uuid": "cd5fbad4-4225-11ef-aa83-81fbc7dfef58", 00:13:35.873 "is_configured": true, 00:13:35.873 "data_offset": 2048, 00:13:35.873 "data_size": 63488 00:13:35.873 } 00:13:35.873 ] 00:13:35.873 }' 00:13:35.873 21:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:35.873 21:12:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:36.140 21:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:13:36.140 21:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.412 21:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:13:36.412 21:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:36.412 21:12:47 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:13:36.672 21:12:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ceea0696-4225-11ef-aa83-81fbc7dfef58 00:13:36.672 [2024-07-14 21:12:48.160565] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:13:36.672 [2024-07-14 21:12:48.160621] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x341b58434f00 00:13:36.672 [2024-07-14 21:12:48.160626] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:36.672 [2024-07-14 21:12:48.160644] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x341b58497e20 00:13:36.672 [2024-07-14 21:12:48.160692] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x341b58434f00 00:13:36.672 [2024-07-14 21:12:48.160696] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x341b58434f00 00:13:36.672 [2024-07-14 21:12:48.160715] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:36.672 NewBaseBdev 00:13:36.672 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:13:36.672 21:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:13:36.672 21:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:13:36.672 21:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:13:36.672 21:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:13:36.672 21:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:13:36.672 21:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:13:36.931 21:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:13:37.189 [ 00:13:37.189 { 00:13:37.189 "name": "NewBaseBdev", 00:13:37.189 "aliases": [ 00:13:37.189 "ceea0696-4225-11ef-aa83-81fbc7dfef58" 00:13:37.189 ], 00:13:37.189 "product_name": "Malloc disk", 00:13:37.189 "block_size": 512, 00:13:37.189 "num_blocks": 65536, 00:13:37.189 "uuid": "ceea0696-4225-11ef-aa83-81fbc7dfef58", 00:13:37.189 "assigned_rate_limits": { 00:13:37.189 "rw_ios_per_sec": 0, 00:13:37.189 "rw_mbytes_per_sec": 0, 00:13:37.189 "r_mbytes_per_sec": 0, 00:13:37.189 "w_mbytes_per_sec": 0 00:13:37.189 }, 00:13:37.189 "claimed": true, 00:13:37.189 "claim_type": "exclusive_write", 00:13:37.189 "zoned": false, 00:13:37.190 "supported_io_types": { 00:13:37.190 "read": true, 00:13:37.190 "write": true, 00:13:37.190 "unmap": true, 00:13:37.190 "flush": true, 00:13:37.190 "reset": true, 00:13:37.190 "nvme_admin": false, 00:13:37.190 "nvme_io": false, 00:13:37.190 "nvme_io_md": false, 00:13:37.190 "write_zeroes": true, 00:13:37.190 "zcopy": true, 00:13:37.190 "get_zone_info": false, 00:13:37.190 "zone_management": false, 00:13:37.190 "zone_append": false, 00:13:37.190 "compare": false, 00:13:37.190 "compare_and_write": false, 00:13:37.190 "abort": true, 
00:13:37.190 "seek_hole": false, 00:13:37.190 "seek_data": false, 00:13:37.190 "copy": true, 00:13:37.190 "nvme_iov_md": false 00:13:37.190 }, 00:13:37.190 "memory_domains": [ 00:13:37.190 { 00:13:37.190 "dma_device_id": "system", 00:13:37.190 "dma_device_type": 1 00:13:37.190 }, 00:13:37.190 { 00:13:37.190 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.190 "dma_device_type": 2 00:13:37.190 } 00:13:37.190 ], 00:13:37.190 "driver_specific": {} 00:13:37.190 } 00:13:37.190 ] 00:13:37.190 21:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:13:37.190 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid0 64 4 00:13:37.190 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:13:37.190 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:37.190 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:37.190 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:37.190 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:37.190 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:37.190 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:37.190 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:37.190 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:37.190 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:37.190 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:13:37.448 21:12:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:37.448 "name": "Existed_Raid", 00:13:37.448 "uuid": "cdd0096c-4225-11ef-aa83-81fbc7dfef58", 00:13:37.448 "strip_size_kb": 64, 00:13:37.448 "state": "online", 00:13:37.448 "raid_level": "raid0", 00:13:37.448 "superblock": true, 00:13:37.448 "num_base_bdevs": 4, 00:13:37.448 "num_base_bdevs_discovered": 4, 00:13:37.448 "num_base_bdevs_operational": 4, 00:13:37.448 "base_bdevs_list": [ 00:13:37.448 { 00:13:37.448 "name": "NewBaseBdev", 00:13:37.448 "uuid": "ceea0696-4225-11ef-aa83-81fbc7dfef58", 00:13:37.448 "is_configured": true, 00:13:37.448 "data_offset": 2048, 00:13:37.448 "data_size": 63488 00:13:37.448 }, 00:13:37.448 { 00:13:37.448 "name": "BaseBdev2", 00:13:37.448 "uuid": "cc916d61-4225-11ef-aa83-81fbc7dfef58", 00:13:37.448 "is_configured": true, 00:13:37.448 "data_offset": 2048, 00:13:37.448 "data_size": 63488 00:13:37.448 }, 00:13:37.448 { 00:13:37.448 "name": "BaseBdev3", 00:13:37.448 "uuid": "ccf9cc94-4225-11ef-aa83-81fbc7dfef58", 00:13:37.448 "is_configured": true, 00:13:37.448 "data_offset": 2048, 00:13:37.448 "data_size": 63488 00:13:37.448 }, 00:13:37.448 { 00:13:37.448 "name": "BaseBdev4", 00:13:37.448 "uuid": "cd5fbad4-4225-11ef-aa83-81fbc7dfef58", 00:13:37.448 "is_configured": true, 00:13:37.448 "data_offset": 2048, 00:13:37.448 "data_size": 63488 00:13:37.448 } 00:13:37.448 ] 00:13:37.448 }' 00:13:37.448 21:12:48 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:37.448 21:12:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:37.707 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:13:37.707 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:13:37.707 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:37.707 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:37.707 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:37.707 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:13:37.707 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:13:37.707 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:37.966 [2024-07-14 21:12:49.384498] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:37.966 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:37.966 "name": "Existed_Raid", 00:13:37.966 "aliases": [ 00:13:37.966 "cdd0096c-4225-11ef-aa83-81fbc7dfef58" 00:13:37.966 ], 00:13:37.966 "product_name": "Raid Volume", 00:13:37.966 "block_size": 512, 00:13:37.966 "num_blocks": 253952, 00:13:37.966 "uuid": "cdd0096c-4225-11ef-aa83-81fbc7dfef58", 00:13:37.966 "assigned_rate_limits": { 00:13:37.966 "rw_ios_per_sec": 0, 00:13:37.966 "rw_mbytes_per_sec": 0, 00:13:37.966 "r_mbytes_per_sec": 0, 00:13:37.966 "w_mbytes_per_sec": 0 00:13:37.966 }, 00:13:37.966 "claimed": false, 00:13:37.966 "zoned": false, 00:13:37.966 "supported_io_types": { 00:13:37.966 "read": true, 00:13:37.966 "write": true, 00:13:37.966 "unmap": true, 00:13:37.966 "flush": true, 00:13:37.966 "reset": true, 00:13:37.966 "nvme_admin": false, 00:13:37.966 "nvme_io": false, 00:13:37.966 "nvme_io_md": false, 00:13:37.966 "write_zeroes": true, 00:13:37.966 "zcopy": false, 00:13:37.966 "get_zone_info": false, 00:13:37.966 "zone_management": false, 00:13:37.966 "zone_append": false, 00:13:37.966 "compare": false, 00:13:37.966 "compare_and_write": false, 00:13:37.966 "abort": false, 00:13:37.966 "seek_hole": false, 00:13:37.966 "seek_data": false, 00:13:37.966 "copy": false, 00:13:37.966 "nvme_iov_md": false 00:13:37.966 }, 00:13:37.966 "memory_domains": [ 00:13:37.966 { 00:13:37.966 "dma_device_id": "system", 00:13:37.966 "dma_device_type": 1 00:13:37.966 }, 00:13:37.966 { 00:13:37.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.966 "dma_device_type": 2 00:13:37.966 }, 00:13:37.966 { 00:13:37.966 "dma_device_id": "system", 00:13:37.966 "dma_device_type": 1 00:13:37.966 }, 00:13:37.966 { 00:13:37.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.966 "dma_device_type": 2 00:13:37.966 }, 00:13:37.966 { 00:13:37.966 "dma_device_id": "system", 00:13:37.966 "dma_device_type": 1 00:13:37.966 }, 00:13:37.966 { 00:13:37.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.966 "dma_device_type": 2 00:13:37.966 }, 00:13:37.966 { 00:13:37.966 "dma_device_id": "system", 00:13:37.966 "dma_device_type": 1 00:13:37.966 }, 00:13:37.966 { 00:13:37.966 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:37.966 
"dma_device_type": 2 00:13:37.966 } 00:13:37.966 ], 00:13:37.966 "driver_specific": { 00:13:37.966 "raid": { 00:13:37.966 "uuid": "cdd0096c-4225-11ef-aa83-81fbc7dfef58", 00:13:37.966 "strip_size_kb": 64, 00:13:37.966 "state": "online", 00:13:37.966 "raid_level": "raid0", 00:13:37.966 "superblock": true, 00:13:37.966 "num_base_bdevs": 4, 00:13:37.966 "num_base_bdevs_discovered": 4, 00:13:37.966 "num_base_bdevs_operational": 4, 00:13:37.966 "base_bdevs_list": [ 00:13:37.966 { 00:13:37.966 "name": "NewBaseBdev", 00:13:37.966 "uuid": "ceea0696-4225-11ef-aa83-81fbc7dfef58", 00:13:37.966 "is_configured": true, 00:13:37.966 "data_offset": 2048, 00:13:37.966 "data_size": 63488 00:13:37.966 }, 00:13:37.966 { 00:13:37.966 "name": "BaseBdev2", 00:13:37.966 "uuid": "cc916d61-4225-11ef-aa83-81fbc7dfef58", 00:13:37.966 "is_configured": true, 00:13:37.966 "data_offset": 2048, 00:13:37.966 "data_size": 63488 00:13:37.966 }, 00:13:37.966 { 00:13:37.966 "name": "BaseBdev3", 00:13:37.966 "uuid": "ccf9cc94-4225-11ef-aa83-81fbc7dfef58", 00:13:37.966 "is_configured": true, 00:13:37.966 "data_offset": 2048, 00:13:37.966 "data_size": 63488 00:13:37.966 }, 00:13:37.966 { 00:13:37.966 "name": "BaseBdev4", 00:13:37.966 "uuid": "cd5fbad4-4225-11ef-aa83-81fbc7dfef58", 00:13:37.966 "is_configured": true, 00:13:37.966 "data_offset": 2048, 00:13:37.966 "data_size": 63488 00:13:37.966 } 00:13:37.966 ] 00:13:37.966 } 00:13:37.966 } 00:13:37.966 }' 00:13:37.966 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:37.966 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:13:37.966 BaseBdev2 00:13:37.966 BaseBdev3 00:13:37.966 BaseBdev4' 00:13:37.966 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:37.966 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:13:37.966 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:38.224 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:38.224 "name": "NewBaseBdev", 00:13:38.224 "aliases": [ 00:13:38.224 "ceea0696-4225-11ef-aa83-81fbc7dfef58" 00:13:38.224 ], 00:13:38.224 "product_name": "Malloc disk", 00:13:38.224 "block_size": 512, 00:13:38.224 "num_blocks": 65536, 00:13:38.224 "uuid": "ceea0696-4225-11ef-aa83-81fbc7dfef58", 00:13:38.224 "assigned_rate_limits": { 00:13:38.224 "rw_ios_per_sec": 0, 00:13:38.224 "rw_mbytes_per_sec": 0, 00:13:38.224 "r_mbytes_per_sec": 0, 00:13:38.224 "w_mbytes_per_sec": 0 00:13:38.224 }, 00:13:38.224 "claimed": true, 00:13:38.224 "claim_type": "exclusive_write", 00:13:38.224 "zoned": false, 00:13:38.224 "supported_io_types": { 00:13:38.224 "read": true, 00:13:38.224 "write": true, 00:13:38.224 "unmap": true, 00:13:38.224 "flush": true, 00:13:38.224 "reset": true, 00:13:38.224 "nvme_admin": false, 00:13:38.224 "nvme_io": false, 00:13:38.224 "nvme_io_md": false, 00:13:38.224 "write_zeroes": true, 00:13:38.224 "zcopy": true, 00:13:38.224 "get_zone_info": false, 00:13:38.224 "zone_management": false, 00:13:38.224 "zone_append": false, 00:13:38.224 "compare": false, 00:13:38.224 "compare_and_write": false, 00:13:38.224 "abort": true, 00:13:38.224 "seek_hole": false, 00:13:38.224 "seek_data": false, 00:13:38.224 
"copy": true, 00:13:38.224 "nvme_iov_md": false 00:13:38.224 }, 00:13:38.224 "memory_domains": [ 00:13:38.224 { 00:13:38.224 "dma_device_id": "system", 00:13:38.224 "dma_device_type": 1 00:13:38.224 }, 00:13:38.224 { 00:13:38.224 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.224 "dma_device_type": 2 00:13:38.224 } 00:13:38.224 ], 00:13:38.224 "driver_specific": {} 00:13:38.224 }' 00:13:38.224 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:38.225 21:12:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:13:38.483 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:38.483 "name": "BaseBdev2", 00:13:38.483 "aliases": [ 00:13:38.483 "cc916d61-4225-11ef-aa83-81fbc7dfef58" 00:13:38.483 ], 00:13:38.483 "product_name": "Malloc disk", 00:13:38.483 "block_size": 512, 00:13:38.483 "num_blocks": 65536, 00:13:38.483 "uuid": "cc916d61-4225-11ef-aa83-81fbc7dfef58", 00:13:38.483 "assigned_rate_limits": { 00:13:38.483 "rw_ios_per_sec": 0, 00:13:38.483 "rw_mbytes_per_sec": 0, 00:13:38.483 "r_mbytes_per_sec": 0, 00:13:38.483 "w_mbytes_per_sec": 0 00:13:38.483 }, 00:13:38.483 "claimed": true, 00:13:38.483 "claim_type": "exclusive_write", 00:13:38.483 "zoned": false, 00:13:38.483 "supported_io_types": { 00:13:38.483 "read": true, 00:13:38.483 "write": true, 00:13:38.483 "unmap": true, 00:13:38.483 "flush": true, 00:13:38.483 "reset": true, 00:13:38.483 "nvme_admin": false, 00:13:38.483 "nvme_io": false, 00:13:38.483 "nvme_io_md": false, 00:13:38.483 "write_zeroes": true, 00:13:38.483 "zcopy": true, 00:13:38.483 "get_zone_info": false, 00:13:38.483 "zone_management": false, 00:13:38.483 "zone_append": false, 00:13:38.483 "compare": false, 00:13:38.483 "compare_and_write": false, 00:13:38.483 "abort": true, 00:13:38.483 "seek_hole": false, 00:13:38.483 "seek_data": false, 00:13:38.483 "copy": true, 00:13:38.483 "nvme_iov_md": false 00:13:38.483 }, 00:13:38.483 "memory_domains": [ 00:13:38.483 { 00:13:38.483 "dma_device_id": "system", 
00:13:38.483 "dma_device_type": 1 00:13:38.483 }, 00:13:38.483 { 00:13:38.483 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:38.483 "dma_device_type": 2 00:13:38.483 } 00:13:38.483 ], 00:13:38.483 "driver_specific": {} 00:13:38.483 }' 00:13:38.483 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:38.483 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:38.742 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:38.742 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:38.742 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:38.742 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:38.742 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:38.742 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:38.742 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:38.742 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:38.742 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:38.742 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:38.742 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:38.742 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:13:38.742 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:39.000 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:39.000 "name": "BaseBdev3", 00:13:39.000 "aliases": [ 00:13:39.000 "ccf9cc94-4225-11ef-aa83-81fbc7dfef58" 00:13:39.000 ], 00:13:39.000 "product_name": "Malloc disk", 00:13:39.000 "block_size": 512, 00:13:39.000 "num_blocks": 65536, 00:13:39.000 "uuid": "ccf9cc94-4225-11ef-aa83-81fbc7dfef58", 00:13:39.000 "assigned_rate_limits": { 00:13:39.000 "rw_ios_per_sec": 0, 00:13:39.000 "rw_mbytes_per_sec": 0, 00:13:39.000 "r_mbytes_per_sec": 0, 00:13:39.000 "w_mbytes_per_sec": 0 00:13:39.000 }, 00:13:39.000 "claimed": true, 00:13:39.000 "claim_type": "exclusive_write", 00:13:39.000 "zoned": false, 00:13:39.000 "supported_io_types": { 00:13:39.000 "read": true, 00:13:39.000 "write": true, 00:13:39.000 "unmap": true, 00:13:39.000 "flush": true, 00:13:39.000 "reset": true, 00:13:39.000 "nvme_admin": false, 00:13:39.000 "nvme_io": false, 00:13:39.000 "nvme_io_md": false, 00:13:39.000 "write_zeroes": true, 00:13:39.000 "zcopy": true, 00:13:39.001 "get_zone_info": false, 00:13:39.001 "zone_management": false, 00:13:39.001 "zone_append": false, 00:13:39.001 "compare": false, 00:13:39.001 "compare_and_write": false, 00:13:39.001 "abort": true, 00:13:39.001 "seek_hole": false, 00:13:39.001 "seek_data": false, 00:13:39.001 "copy": true, 00:13:39.001 "nvme_iov_md": false 00:13:39.001 }, 00:13:39.001 "memory_domains": [ 00:13:39.001 { 00:13:39.001 "dma_device_id": "system", 00:13:39.001 "dma_device_type": 1 00:13:39.001 }, 00:13:39.001 { 00:13:39.001 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.001 "dma_device_type": 
2 00:13:39.001 } 00:13:39.001 ], 00:13:39.001 "driver_specific": {} 00:13:39.001 }' 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:13:39.001 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:39.258 "name": "BaseBdev4", 00:13:39.258 "aliases": [ 00:13:39.258 "cd5fbad4-4225-11ef-aa83-81fbc7dfef58" 00:13:39.258 ], 00:13:39.258 "product_name": "Malloc disk", 00:13:39.258 "block_size": 512, 00:13:39.258 "num_blocks": 65536, 00:13:39.258 "uuid": "cd5fbad4-4225-11ef-aa83-81fbc7dfef58", 00:13:39.258 "assigned_rate_limits": { 00:13:39.258 "rw_ios_per_sec": 0, 00:13:39.258 "rw_mbytes_per_sec": 0, 00:13:39.258 "r_mbytes_per_sec": 0, 00:13:39.258 "w_mbytes_per_sec": 0 00:13:39.258 }, 00:13:39.258 "claimed": true, 00:13:39.258 "claim_type": "exclusive_write", 00:13:39.258 "zoned": false, 00:13:39.258 "supported_io_types": { 00:13:39.258 "read": true, 00:13:39.258 "write": true, 00:13:39.258 "unmap": true, 00:13:39.258 "flush": true, 00:13:39.258 "reset": true, 00:13:39.258 "nvme_admin": false, 00:13:39.258 "nvme_io": false, 00:13:39.258 "nvme_io_md": false, 00:13:39.258 "write_zeroes": true, 00:13:39.258 "zcopy": true, 00:13:39.258 "get_zone_info": false, 00:13:39.258 "zone_management": false, 00:13:39.258 "zone_append": false, 00:13:39.258 "compare": false, 00:13:39.258 "compare_and_write": false, 00:13:39.258 "abort": true, 00:13:39.258 "seek_hole": false, 00:13:39.258 "seek_data": false, 00:13:39.258 "copy": true, 00:13:39.258 "nvme_iov_md": false 00:13:39.258 }, 00:13:39.258 "memory_domains": [ 00:13:39.258 { 00:13:39.258 "dma_device_id": "system", 00:13:39.258 "dma_device_type": 1 00:13:39.258 }, 00:13:39.258 { 00:13:39.258 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:39.258 "dma_device_type": 2 00:13:39.258 } 00:13:39.258 ], 00:13:39.258 "driver_specific": {} 00:13:39.258 }' 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:39.258 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:13:39.517 [2024-07-14 21:12:50.920479] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:13:39.517 [2024-07-14 21:12:50.920499] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:39.517 [2024-07-14 21:12:50.920521] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:39.517 [2024-07-14 21:12:50.920538] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:39.517 [2024-07-14 21:12:50.920542] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x341b58434f00 name Existed_Raid, state offline 00:13:39.517 21:12:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 59105 00:13:39.517 21:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 59105 ']' 00:13:39.517 21:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 59105 00:13:39.517 21:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:13:39.517 21:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:39.517 21:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 59105 00:13:39.517 21:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:13:39.517 21:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:13:39.517 21:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:13:39.517 21:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59105' 00:13:39.517 killing process with pid 59105 00:13:39.517 21:12:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 59105 00:13:39.517 [2024-07-14 21:12:50.948775] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:39.517 21:12:50 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 59105 00:13:39.517 [2024-07-14 21:12:50.981976] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:39.775 21:12:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:13:39.775 00:13:39.775 real 0m25.452s 00:13:39.775 user 0m46.281s 00:13:39.775 sys 0m3.722s 00:13:39.775 21:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:39.775 ************************************ 00:13:39.775 END TEST raid_state_function_test_sb 00:13:39.775 ************************************ 00:13:39.775 21:12:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:13:39.775 21:12:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:39.775 21:12:51 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid0 4 00:13:39.775 21:12:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:13:39.775 21:12:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:39.775 21:12:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:39.775 ************************************ 00:13:39.775 START TEST raid_superblock_test 00:13:39.775 ************************************ 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid0 4 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid0 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid0 '!=' raid1 ']' 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:13:39.775 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:13:39.776 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=59915 00:13:39.776 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 59915 /var/tmp/spdk-raid.sock 00:13:39.776 21:12:51 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 
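(Reference note, not part of the captured trace: the raid_superblock_test setup that follows can be reproduced by hand against the bdev_svc instance launched above. A minimal sketch, assuming bdev_svc is already listening on /var/tmp/spdk-raid.sock; the rpc.py subcommands, sizes, strip size, and passthru UUIDs are taken verbatim from this log, while the loop and variable names are illustrative only:)

# Point rpc.py at the test's UNIX domain socket (unquoted on purpose so -s splits)
rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# One 32 MiB malloc bdev (512-byte blocks, i.e. 65536 blocks as reported in the
# bdev_get_bdevs output above) plus a passthru wrapper per base bdev
for i in 1 2 3 4; do
  $rpc bdev_malloc_create 32 512 -b malloc$i
  $rpc bdev_passthru_create -b malloc$i -p pt$i -u 00000000-0000-0000-0000-00000000000$i
done

# Assemble raid0 with a 64 KiB strip and on-disk superblock (-s),
# mirroring the bdev_raid.sh@429 invocation traced below
$rpc bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

# Confirm the array came online with all four base bdevs configured
$rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .state'

(Expected output, per the state dump later in the trace, is "online"; tearing down with bdev_raid_delete raid_bdev1 returns the base bdevs to an unclaimed state.)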
00:13:39.776 21:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 59915 ']' 00:13:39.776 21:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:39.776 21:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:39.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:39.776 21:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:39.776 21:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:39.776 21:12:51 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:39.776 [2024-07-14 21:12:51.285681] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:39.776 [2024-07-14 21:12:51.285954] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:40.343 EAL: TSC is not safe to use in SMP mode 00:13:40.343 EAL: TSC is not invariant 00:13:40.343 [2024-07-14 21:12:51.867924] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:40.602 [2024-07-14 21:12:51.969552] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:13:40.602 [2024-07-14 21:12:51.972230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:40.602 [2024-07-14 21:12:51.973204] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.602 [2024-07-14 21:12:51.973222] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:40.860 21:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:40.861 21:12:52 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:13:40.861 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:13:40.861 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:40.861 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:13:40.861 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:13:40.861 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:13:40.861 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:40.861 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:40.861 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:40.861 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:13:41.120 malloc1 00:13:41.120 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:41.379 [2024-07-14 21:12:52.755042] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:41.379 [2024-07-14 21:12:52.755113] 
vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.379 [2024-07-14 21:12:52.755140] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x361352e34780 00:13:41.379 [2024-07-14 21:12:52.755147] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.379 [2024-07-14 21:12:52.756232] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.379 [2024-07-14 21:12:52.756271] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:41.379 pt1 00:13:41.379 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:41.379 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:41.379 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:13:41.379 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:13:41.379 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:13:41.379 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:41.379 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:41.379 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:41.379 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:13:41.638 malloc2 00:13:41.638 21:12:52 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:41.895 [2024-07-14 21:12:53.219045] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:41.895 [2024-07-14 21:12:53.219105] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:41.895 [2024-07-14 21:12:53.219133] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x361352e34c80 00:13:41.895 [2024-07-14 21:12:53.219140] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:41.895 [2024-07-14 21:12:53.219827] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:41.895 [2024-07-14 21:12:53.219849] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:41.895 pt2 00:13:41.895 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:41.895 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:41.895 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:13:41.895 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:13:41.895 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:13:41.895 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:41.895 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:41.895 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # 
base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:41.895 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:13:42.152 malloc3 00:13:42.152 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:42.152 [2024-07-14 21:12:53.663050] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:42.152 [2024-07-14 21:12:53.663112] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.152 [2024-07-14 21:12:53.663138] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x361352e35180 00:13:42.152 [2024-07-14 21:12:53.663145] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.152 [2024-07-14 21:12:53.663835] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.152 [2024-07-14 21:12:53.663859] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:42.152 pt3 00:13:42.152 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:42.152 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:42.152 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:13:42.152 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:13:42.152 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:13:42.152 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:13:42.152 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:13:42.152 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:13:42.152 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:13:42.410 malloc4 00:13:42.410 21:12:53 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:42.752 [2024-07-14 21:12:54.123058] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:42.752 [2024-07-14 21:12:54.123128] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:42.752 [2024-07-14 21:12:54.123154] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x361352e35680 00:13:42.752 [2024-07-14 21:12:54.123161] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:42.752 [2024-07-14 21:12:54.123716] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:42.752 [2024-07-14 21:12:54.123756] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:42.752 pt4 00:13:42.752 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:13:42.752 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:13:42.752 21:12:54 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:13:43.019 [2024-07-14 21:12:54.347082] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:43.019 [2024-07-14 21:12:54.347701] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:43.019 [2024-07-14 21:12:54.347723] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:43.019 [2024-07-14 21:12:54.347734] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:43.019 [2024-07-14 21:12:54.347799] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x361352e35900 00:13:43.019 [2024-07-14 21:12:54.347805] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:43.019 [2024-07-14 21:12:54.347863] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x361352e97e20 00:13:43.019 [2024-07-14 21:12:54.347937] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x361352e35900 00:13:43.019 [2024-07-14 21:12:54.347941] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x361352e35900 00:13:43.019 [2024-07-14 21:12:54.347968] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:43.019 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:43.019 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:43.019 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:43.020 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:43.020 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:43.020 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:43.020 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:43.020 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:43.020 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:43.020 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:43.020 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:43.020 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:43.278 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:43.278 "name": "raid_bdev1", 00:13:43.278 "uuid": "d602448c-4225-11ef-aa83-81fbc7dfef58", 00:13:43.278 "strip_size_kb": 64, 00:13:43.278 "state": "online", 00:13:43.278 "raid_level": "raid0", 00:13:43.278 "superblock": true, 00:13:43.278 "num_base_bdevs": 4, 00:13:43.278 "num_base_bdevs_discovered": 4, 00:13:43.278 "num_base_bdevs_operational": 4, 00:13:43.278 "base_bdevs_list": [ 00:13:43.278 { 00:13:43.278 "name": "pt1", 00:13:43.278 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.278 "is_configured": true, 00:13:43.278 "data_offset": 2048, 00:13:43.278 "data_size": 
63488 00:13:43.278 }, 00:13:43.278 { 00:13:43.278 "name": "pt2", 00:13:43.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.278 "is_configured": true, 00:13:43.278 "data_offset": 2048, 00:13:43.278 "data_size": 63488 00:13:43.278 }, 00:13:43.278 { 00:13:43.278 "name": "pt3", 00:13:43.278 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.278 "is_configured": true, 00:13:43.278 "data_offset": 2048, 00:13:43.278 "data_size": 63488 00:13:43.278 }, 00:13:43.278 { 00:13:43.278 "name": "pt4", 00:13:43.278 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.278 "is_configured": true, 00:13:43.278 "data_offset": 2048, 00:13:43.278 "data_size": 63488 00:13:43.278 } 00:13:43.278 ] 00:13:43.278 }' 00:13:43.278 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:43.278 21:12:54 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:43.536 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:13:43.536 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:43.536 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:43.536 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:43.536 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:43.536 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:43.536 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:43.536 21:12:54 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:43.795 [2024-07-14 21:12:55.127213] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:43.795 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:43.795 "name": "raid_bdev1", 00:13:43.795 "aliases": [ 00:13:43.795 "d602448c-4225-11ef-aa83-81fbc7dfef58" 00:13:43.795 ], 00:13:43.795 "product_name": "Raid Volume", 00:13:43.795 "block_size": 512, 00:13:43.795 "num_blocks": 253952, 00:13:43.795 "uuid": "d602448c-4225-11ef-aa83-81fbc7dfef58", 00:13:43.795 "assigned_rate_limits": { 00:13:43.795 "rw_ios_per_sec": 0, 00:13:43.795 "rw_mbytes_per_sec": 0, 00:13:43.795 "r_mbytes_per_sec": 0, 00:13:43.795 "w_mbytes_per_sec": 0 00:13:43.795 }, 00:13:43.795 "claimed": false, 00:13:43.795 "zoned": false, 00:13:43.795 "supported_io_types": { 00:13:43.795 "read": true, 00:13:43.795 "write": true, 00:13:43.795 "unmap": true, 00:13:43.795 "flush": true, 00:13:43.795 "reset": true, 00:13:43.795 "nvme_admin": false, 00:13:43.795 "nvme_io": false, 00:13:43.795 "nvme_io_md": false, 00:13:43.795 "write_zeroes": true, 00:13:43.795 "zcopy": false, 00:13:43.795 "get_zone_info": false, 00:13:43.795 "zone_management": false, 00:13:43.795 "zone_append": false, 00:13:43.795 "compare": false, 00:13:43.795 "compare_and_write": false, 00:13:43.795 "abort": false, 00:13:43.795 "seek_hole": false, 00:13:43.795 "seek_data": false, 00:13:43.795 "copy": false, 00:13:43.795 "nvme_iov_md": false 00:13:43.795 }, 00:13:43.795 "memory_domains": [ 00:13:43.795 { 00:13:43.795 "dma_device_id": "system", 00:13:43.795 "dma_device_type": 1 00:13:43.795 }, 00:13:43.795 { 00:13:43.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.795 "dma_device_type": 2 
00:13:43.795 }, 00:13:43.795 { 00:13:43.795 "dma_device_id": "system", 00:13:43.795 "dma_device_type": 1 00:13:43.795 }, 00:13:43.795 { 00:13:43.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.795 "dma_device_type": 2 00:13:43.795 }, 00:13:43.795 { 00:13:43.795 "dma_device_id": "system", 00:13:43.795 "dma_device_type": 1 00:13:43.795 }, 00:13:43.795 { 00:13:43.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.795 "dma_device_type": 2 00:13:43.795 }, 00:13:43.795 { 00:13:43.795 "dma_device_id": "system", 00:13:43.795 "dma_device_type": 1 00:13:43.795 }, 00:13:43.795 { 00:13:43.795 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:43.795 "dma_device_type": 2 00:13:43.795 } 00:13:43.795 ], 00:13:43.795 "driver_specific": { 00:13:43.795 "raid": { 00:13:43.795 "uuid": "d602448c-4225-11ef-aa83-81fbc7dfef58", 00:13:43.795 "strip_size_kb": 64, 00:13:43.795 "state": "online", 00:13:43.795 "raid_level": "raid0", 00:13:43.795 "superblock": true, 00:13:43.795 "num_base_bdevs": 4, 00:13:43.795 "num_base_bdevs_discovered": 4, 00:13:43.795 "num_base_bdevs_operational": 4, 00:13:43.795 "base_bdevs_list": [ 00:13:43.795 { 00:13:43.795 "name": "pt1", 00:13:43.795 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:43.795 "is_configured": true, 00:13:43.795 "data_offset": 2048, 00:13:43.795 "data_size": 63488 00:13:43.795 }, 00:13:43.795 { 00:13:43.795 "name": "pt2", 00:13:43.795 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:43.795 "is_configured": true, 00:13:43.795 "data_offset": 2048, 00:13:43.795 "data_size": 63488 00:13:43.795 }, 00:13:43.795 { 00:13:43.795 "name": "pt3", 00:13:43.795 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:43.795 "is_configured": true, 00:13:43.795 "data_offset": 2048, 00:13:43.795 "data_size": 63488 00:13:43.795 }, 00:13:43.795 { 00:13:43.795 "name": "pt4", 00:13:43.795 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:43.795 "is_configured": true, 00:13:43.795 "data_offset": 2048, 00:13:43.795 "data_size": 63488 00:13:43.795 } 00:13:43.795 ] 00:13:43.795 } 00:13:43.795 } 00:13:43.795 }' 00:13:43.795 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:43.795 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:43.795 pt2 00:13:43.795 pt3 00:13:43.795 pt4' 00:13:43.795 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:43.795 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:43.795 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:44.054 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:44.054 "name": "pt1", 00:13:44.054 "aliases": [ 00:13:44.054 "00000000-0000-0000-0000-000000000001" 00:13:44.054 ], 00:13:44.054 "product_name": "passthru", 00:13:44.054 "block_size": 512, 00:13:44.054 "num_blocks": 65536, 00:13:44.054 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:44.054 "assigned_rate_limits": { 00:13:44.054 "rw_ios_per_sec": 0, 00:13:44.054 "rw_mbytes_per_sec": 0, 00:13:44.054 "r_mbytes_per_sec": 0, 00:13:44.054 "w_mbytes_per_sec": 0 00:13:44.054 }, 00:13:44.054 "claimed": true, 00:13:44.054 "claim_type": "exclusive_write", 00:13:44.054 "zoned": false, 00:13:44.054 "supported_io_types": { 00:13:44.054 "read": true, 00:13:44.054 "write": 
true, 00:13:44.054 "unmap": true, 00:13:44.054 "flush": true, 00:13:44.054 "reset": true, 00:13:44.054 "nvme_admin": false, 00:13:44.054 "nvme_io": false, 00:13:44.054 "nvme_io_md": false, 00:13:44.055 "write_zeroes": true, 00:13:44.055 "zcopy": true, 00:13:44.055 "get_zone_info": false, 00:13:44.055 "zone_management": false, 00:13:44.055 "zone_append": false, 00:13:44.055 "compare": false, 00:13:44.055 "compare_and_write": false, 00:13:44.055 "abort": true, 00:13:44.055 "seek_hole": false, 00:13:44.055 "seek_data": false, 00:13:44.055 "copy": true, 00:13:44.055 "nvme_iov_md": false 00:13:44.055 }, 00:13:44.055 "memory_domains": [ 00:13:44.055 { 00:13:44.055 "dma_device_id": "system", 00:13:44.055 "dma_device_type": 1 00:13:44.055 }, 00:13:44.055 { 00:13:44.055 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.055 "dma_device_type": 2 00:13:44.055 } 00:13:44.055 ], 00:13:44.055 "driver_specific": { 00:13:44.055 "passthru": { 00:13:44.055 "name": "pt1", 00:13:44.055 "base_bdev_name": "malloc1" 00:13:44.055 } 00:13:44.055 } 00:13:44.055 }' 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:44.055 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:44.314 "name": "pt2", 00:13:44.314 "aliases": [ 00:13:44.314 "00000000-0000-0000-0000-000000000002" 00:13:44.314 ], 00:13:44.314 "product_name": "passthru", 00:13:44.314 "block_size": 512, 00:13:44.314 "num_blocks": 65536, 00:13:44.314 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:44.314 "assigned_rate_limits": { 00:13:44.314 "rw_ios_per_sec": 0, 00:13:44.314 "rw_mbytes_per_sec": 0, 00:13:44.314 "r_mbytes_per_sec": 0, 00:13:44.314 "w_mbytes_per_sec": 0 00:13:44.314 }, 00:13:44.314 "claimed": true, 00:13:44.314 "claim_type": "exclusive_write", 00:13:44.314 "zoned": false, 00:13:44.314 "supported_io_types": { 00:13:44.314 "read": true, 00:13:44.314 "write": true, 00:13:44.314 "unmap": true, 00:13:44.314 "flush": true, 00:13:44.314 "reset": true, 00:13:44.314 "nvme_admin": false, 00:13:44.314 "nvme_io": false, 
00:13:44.314 "nvme_io_md": false, 00:13:44.314 "write_zeroes": true, 00:13:44.314 "zcopy": true, 00:13:44.314 "get_zone_info": false, 00:13:44.314 "zone_management": false, 00:13:44.314 "zone_append": false, 00:13:44.314 "compare": false, 00:13:44.314 "compare_and_write": false, 00:13:44.314 "abort": true, 00:13:44.314 "seek_hole": false, 00:13:44.314 "seek_data": false, 00:13:44.314 "copy": true, 00:13:44.314 "nvme_iov_md": false 00:13:44.314 }, 00:13:44.314 "memory_domains": [ 00:13:44.314 { 00:13:44.314 "dma_device_id": "system", 00:13:44.314 "dma_device_type": 1 00:13:44.314 }, 00:13:44.314 { 00:13:44.314 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.314 "dma_device_type": 2 00:13:44.314 } 00:13:44.314 ], 00:13:44.314 "driver_specific": { 00:13:44.314 "passthru": { 00:13:44.314 "name": "pt2", 00:13:44.314 "base_bdev_name": "malloc2" 00:13:44.314 } 00:13:44.314 } 00:13:44.314 }' 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:44.314 21:12:55 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:44.573 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:44.573 "name": "pt3", 00:13:44.573 "aliases": [ 00:13:44.573 "00000000-0000-0000-0000-000000000003" 00:13:44.573 ], 00:13:44.573 "product_name": "passthru", 00:13:44.573 "block_size": 512, 00:13:44.573 "num_blocks": 65536, 00:13:44.573 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:44.573 "assigned_rate_limits": { 00:13:44.573 "rw_ios_per_sec": 0, 00:13:44.573 "rw_mbytes_per_sec": 0, 00:13:44.573 "r_mbytes_per_sec": 0, 00:13:44.573 "w_mbytes_per_sec": 0 00:13:44.573 }, 00:13:44.573 "claimed": true, 00:13:44.573 "claim_type": "exclusive_write", 00:13:44.573 "zoned": false, 00:13:44.573 "supported_io_types": { 00:13:44.573 "read": true, 00:13:44.573 "write": true, 00:13:44.573 "unmap": true, 00:13:44.573 "flush": true, 00:13:44.573 "reset": true, 00:13:44.573 "nvme_admin": false, 00:13:44.573 "nvme_io": false, 00:13:44.573 "nvme_io_md": false, 00:13:44.573 "write_zeroes": true, 00:13:44.573 "zcopy": true, 00:13:44.573 "get_zone_info": false, 00:13:44.573 
"zone_management": false, 00:13:44.573 "zone_append": false, 00:13:44.573 "compare": false, 00:13:44.573 "compare_and_write": false, 00:13:44.573 "abort": true, 00:13:44.573 "seek_hole": false, 00:13:44.573 "seek_data": false, 00:13:44.573 "copy": true, 00:13:44.573 "nvme_iov_md": false 00:13:44.573 }, 00:13:44.573 "memory_domains": [ 00:13:44.573 { 00:13:44.573 "dma_device_id": "system", 00:13:44.573 "dma_device_type": 1 00:13:44.573 }, 00:13:44.573 { 00:13:44.573 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.573 "dma_device_type": 2 00:13:44.573 } 00:13:44.573 ], 00:13:44.573 "driver_specific": { 00:13:44.573 "passthru": { 00:13:44.573 "name": "pt3", 00:13:44.573 "base_bdev_name": "malloc3" 00:13:44.573 } 00:13:44.573 } 00:13:44.573 }' 00:13:44.573 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.573 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.573 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:44.573 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.573 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.573 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:44.573 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.574 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.574 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:44.574 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.574 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.574 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.574 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:44.574 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:13:44.574 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:44.833 "name": "pt4", 00:13:44.833 "aliases": [ 00:13:44.833 "00000000-0000-0000-0000-000000000004" 00:13:44.833 ], 00:13:44.833 "product_name": "passthru", 00:13:44.833 "block_size": 512, 00:13:44.833 "num_blocks": 65536, 00:13:44.833 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:44.833 "assigned_rate_limits": { 00:13:44.833 "rw_ios_per_sec": 0, 00:13:44.833 "rw_mbytes_per_sec": 0, 00:13:44.833 "r_mbytes_per_sec": 0, 00:13:44.833 "w_mbytes_per_sec": 0 00:13:44.833 }, 00:13:44.833 "claimed": true, 00:13:44.833 "claim_type": "exclusive_write", 00:13:44.833 "zoned": false, 00:13:44.833 "supported_io_types": { 00:13:44.833 "read": true, 00:13:44.833 "write": true, 00:13:44.833 "unmap": true, 00:13:44.833 "flush": true, 00:13:44.833 "reset": true, 00:13:44.833 "nvme_admin": false, 00:13:44.833 "nvme_io": false, 00:13:44.833 "nvme_io_md": false, 00:13:44.833 "write_zeroes": true, 00:13:44.833 "zcopy": true, 00:13:44.833 "get_zone_info": false, 00:13:44.833 "zone_management": false, 00:13:44.833 "zone_append": false, 00:13:44.833 "compare": false, 00:13:44.833 "compare_and_write": false, 00:13:44.833 "abort": 
true, 00:13:44.833 "seek_hole": false, 00:13:44.833 "seek_data": false, 00:13:44.833 "copy": true, 00:13:44.833 "nvme_iov_md": false 00:13:44.833 }, 00:13:44.833 "memory_domains": [ 00:13:44.833 { 00:13:44.833 "dma_device_id": "system", 00:13:44.833 "dma_device_type": 1 00:13:44.833 }, 00:13:44.833 { 00:13:44.833 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:44.833 "dma_device_type": 2 00:13:44.833 } 00:13:44.833 ], 00:13:44.833 "driver_specific": { 00:13:44.833 "passthru": { 00:13:44.833 "name": "pt4", 00:13:44.833 "base_bdev_name": "malloc4" 00:13:44.833 } 00:13:44.833 } 00:13:44.833 }' 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:44.833 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:13:45.092 [2024-07-14 21:12:56.599297] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:45.092 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=d602448c-4225-11ef-aa83-81fbc7dfef58 00:13:45.092 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z d602448c-4225-11ef-aa83-81fbc7dfef58 ']' 00:13:45.092 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:45.351 [2024-07-14 21:12:56.871256] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:45.351 [2024-07-14 21:12:56.871273] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:45.351 [2024-07-14 21:12:56.871310] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:45.351 [2024-07-14 21:12:56.871325] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:45.351 [2024-07-14 21:12:56.871329] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x361352e35900 name raid_bdev1, state offline 00:13:45.351 21:12:56 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:45.351 21:12:56 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:13:45.918 21:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:13:45.918 21:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:13:45.918 21:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:45.918 21:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:13:45.918 21:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:45.918 21:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:46.176 21:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.176 21:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:13:46.435 21:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:13:46.435 21:12:57 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:13:46.694 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:13:46.694 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- 
common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:13:46.953 [2024-07-14 21:12:58.455307] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:13:46.953 [2024-07-14 21:12:58.456009] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:13:46.953 [2024-07-14 21:12:58.456028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:13:46.953 [2024-07-14 21:12:58.456036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:13:46.953 [2024-07-14 21:12:58.456050] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:13:46.953 [2024-07-14 21:12:58.456093] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:13:46.953 [2024-07-14 21:12:58.456104] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:13:46.953 [2024-07-14 21:12:58.456114] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:13:46.953 [2024-07-14 21:12:58.456122] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:46.953 [2024-07-14 21:12:58.456126] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x361352e35680 name raid_bdev1, state configuring 00:13:46.953 request: 00:13:46.953 { 00:13:46.953 "name": "raid_bdev1", 00:13:46.953 "raid_level": "raid0", 00:13:46.953 "base_bdevs": [ 00:13:46.953 "malloc1", 00:13:46.953 "malloc2", 00:13:46.953 "malloc3", 00:13:46.953 "malloc4" 00:13:46.953 ], 00:13:46.953 "strip_size_kb": 64, 00:13:46.953 "superblock": false, 00:13:46.953 "method": "bdev_raid_create", 00:13:46.953 "req_id": 1 00:13:46.953 } 00:13:46.953 Got JSON-RPC error response 00:13:46.953 response: 00:13:46.953 { 00:13:46.953 "code": -17, 00:13:46.953 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:13:46.953 } 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:13:46.953 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.212 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:13:47.212 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:13:47.212 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:13:47.470 [2024-07-14 21:12:58.979313] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:13:47.470 [2024-07-14 21:12:58.979373] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base 
bdev opened 00:13:47.470 [2024-07-14 21:12:58.979414] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x361352e35180 00:13:47.470 [2024-07-14 21:12:58.979421] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:47.471 [2024-07-14 21:12:58.980147] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:47.471 [2024-07-14 21:12:58.980173] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:13:47.471 [2024-07-14 21:12:58.980198] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:13:47.471 [2024-07-14 21:12:58.980213] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:13:47.471 pt1 00:13:47.471 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:47.471 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:47.471 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:47.471 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:47.471 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:47.471 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:47.471 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:47.471 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:47.471 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:47.471 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:47.471 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:47.471 21:12:58 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:47.729 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:47.729 "name": "raid_bdev1", 00:13:47.729 "uuid": "d602448c-4225-11ef-aa83-81fbc7dfef58", 00:13:47.729 "strip_size_kb": 64, 00:13:47.729 "state": "configuring", 00:13:47.729 "raid_level": "raid0", 00:13:47.729 "superblock": true, 00:13:47.729 "num_base_bdevs": 4, 00:13:47.729 "num_base_bdevs_discovered": 1, 00:13:47.729 "num_base_bdevs_operational": 4, 00:13:47.729 "base_bdevs_list": [ 00:13:47.729 { 00:13:47.729 "name": "pt1", 00:13:47.729 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:47.729 "is_configured": true, 00:13:47.729 "data_offset": 2048, 00:13:47.729 "data_size": 63488 00:13:47.729 }, 00:13:47.729 { 00:13:47.729 "name": null, 00:13:47.729 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:47.729 "is_configured": false, 00:13:47.729 "data_offset": 2048, 00:13:47.729 "data_size": 63488 00:13:47.729 }, 00:13:47.729 { 00:13:47.729 "name": null, 00:13:47.729 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:47.729 "is_configured": false, 00:13:47.729 "data_offset": 2048, 00:13:47.729 "data_size": 63488 00:13:47.729 }, 00:13:47.729 { 00:13:47.729 "name": null, 00:13:47.729 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:47.729 "is_configured": false, 00:13:47.729 "data_offset": 2048, 00:13:47.729 "data_size": 63488 
00:13:47.729 } 00:13:47.729 ] 00:13:47.729 }' 00:13:47.729 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:47.729 21:12:59 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:47.988 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:13:47.988 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:48.246 [2024-07-14 21:12:59.711339] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:48.246 [2024-07-14 21:12:59.711393] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:48.246 [2024-07-14 21:12:59.711420] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x361352e34780 00:13:48.246 [2024-07-14 21:12:59.711427] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:48.246 [2024-07-14 21:12:59.711552] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:48.246 [2024-07-14 21:12:59.711562] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:48.246 [2024-07-14 21:12:59.711594] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:48.246 [2024-07-14 21:12:59.711602] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:48.246 pt2 00:13:48.246 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:13:48.505 [2024-07-14 21:12:59.931352] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:13:48.505 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid0 64 4 00:13:48.505 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:48.505 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:13:48.505 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:48.505 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:48.505 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:48.505 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:48.505 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:48.505 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:48.505 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:48.505 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:48.505 21:12:59 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:48.765 21:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:48.765 "name": "raid_bdev1", 00:13:48.765 "uuid": "d602448c-4225-11ef-aa83-81fbc7dfef58", 00:13:48.765 "strip_size_kb": 64, 00:13:48.765 "state": "configuring", 00:13:48.765 "raid_level": 
"raid0", 00:13:48.765 "superblock": true, 00:13:48.765 "num_base_bdevs": 4, 00:13:48.765 "num_base_bdevs_discovered": 1, 00:13:48.765 "num_base_bdevs_operational": 4, 00:13:48.765 "base_bdevs_list": [ 00:13:48.765 { 00:13:48.765 "name": "pt1", 00:13:48.765 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:48.765 "is_configured": true, 00:13:48.765 "data_offset": 2048, 00:13:48.765 "data_size": 63488 00:13:48.765 }, 00:13:48.765 { 00:13:48.765 "name": null, 00:13:48.765 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:48.765 "is_configured": false, 00:13:48.765 "data_offset": 2048, 00:13:48.765 "data_size": 63488 00:13:48.765 }, 00:13:48.765 { 00:13:48.765 "name": null, 00:13:48.765 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:48.765 "is_configured": false, 00:13:48.765 "data_offset": 2048, 00:13:48.765 "data_size": 63488 00:13:48.765 }, 00:13:48.765 { 00:13:48.765 "name": null, 00:13:48.765 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:48.765 "is_configured": false, 00:13:48.765 "data_offset": 2048, 00:13:48.765 "data_size": 63488 00:13:48.765 } 00:13:48.765 ] 00:13:48.765 }' 00:13:48.765 21:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:48.765 21:13:00 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:49.023 21:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:13:49.023 21:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:49.023 21:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:13:49.282 [2024-07-14 21:13:00.683362] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:13:49.282 [2024-07-14 21:13:00.683421] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.282 [2024-07-14 21:13:00.683446] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x361352e34780 00:13:49.282 [2024-07-14 21:13:00.683453] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.282 [2024-07-14 21:13:00.683558] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.282 [2024-07-14 21:13:00.683567] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:13:49.282 [2024-07-14 21:13:00.683589] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:13:49.282 [2024-07-14 21:13:00.683596] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:13:49.282 pt2 00:13:49.282 21:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:49.282 21:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:49.282 21:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:13:49.541 [2024-07-14 21:13:00.899377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:13:49.541 [2024-07-14 21:13:00.899426] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.541 [2024-07-14 21:13:00.899452] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x361352e35b80 00:13:49.541 
[2024-07-14 21:13:00.899459] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.541 [2024-07-14 21:13:00.899580] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.541 [2024-07-14 21:13:00.899590] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:13:49.541 [2024-07-14 21:13:00.899609] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:13:49.541 [2024-07-14 21:13:00.899616] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:13:49.541 pt3 00:13:49.541 21:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:49.541 21:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:49.541 21:13:00 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:13:49.800 [2024-07-14 21:13:01.163391] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:13:49.800 [2024-07-14 21:13:01.163452] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:49.800 [2024-07-14 21:13:01.163463] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x361352e35900 00:13:49.800 [2024-07-14 21:13:01.163470] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:49.800 [2024-07-14 21:13:01.163607] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:49.800 [2024-07-14 21:13:01.163623] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:13:49.800 [2024-07-14 21:13:01.163647] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:13:49.800 [2024-07-14 21:13:01.163655] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:13:49.800 [2024-07-14 21:13:01.163685] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x361352e34c80 00:13:49.800 [2024-07-14 21:13:01.163689] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:49.800 [2024-07-14 21:13:01.163709] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x361352e97e20 00:13:49.800 [2024-07-14 21:13:01.163791] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x361352e34c80 00:13:49.800 [2024-07-14 21:13:01.163796] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x361352e34c80 00:13:49.800 [2024-07-14 21:13:01.163817] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:49.800 pt4 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # 
local strip_size=64 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:49.800 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:50.060 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:50.060 "name": "raid_bdev1", 00:13:50.060 "uuid": "d602448c-4225-11ef-aa83-81fbc7dfef58", 00:13:50.060 "strip_size_kb": 64, 00:13:50.060 "state": "online", 00:13:50.060 "raid_level": "raid0", 00:13:50.060 "superblock": true, 00:13:50.060 "num_base_bdevs": 4, 00:13:50.060 "num_base_bdevs_discovered": 4, 00:13:50.060 "num_base_bdevs_operational": 4, 00:13:50.060 "base_bdevs_list": [ 00:13:50.060 { 00:13:50.060 "name": "pt1", 00:13:50.060 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:50.060 "is_configured": true, 00:13:50.060 "data_offset": 2048, 00:13:50.060 "data_size": 63488 00:13:50.060 }, 00:13:50.060 { 00:13:50.060 "name": "pt2", 00:13:50.060 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:50.060 "is_configured": true, 00:13:50.060 "data_offset": 2048, 00:13:50.060 "data_size": 63488 00:13:50.060 }, 00:13:50.060 { 00:13:50.060 "name": "pt3", 00:13:50.060 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:50.060 "is_configured": true, 00:13:50.060 "data_offset": 2048, 00:13:50.060 "data_size": 63488 00:13:50.060 }, 00:13:50.060 { 00:13:50.060 "name": "pt4", 00:13:50.060 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:50.060 "is_configured": true, 00:13:50.060 "data_offset": 2048, 00:13:50.060 "data_size": 63488 00:13:50.060 } 00:13:50.060 ] 00:13:50.060 }' 00:13:50.060 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:50.060 21:13:01 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:50.320 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:13:50.320 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:13:50.320 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:13:50.320 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:13:50.320 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:13:50.320 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:13:50.320 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:13:50.320 21:13:01 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:13:50.580 [2024-07-14 21:13:02.019457] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:50.580 21:13:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:13:50.580 "name": "raid_bdev1", 00:13:50.580 "aliases": [ 00:13:50.580 "d602448c-4225-11ef-aa83-81fbc7dfef58" 00:13:50.580 ], 00:13:50.580 "product_name": "Raid Volume", 00:13:50.580 "block_size": 512, 00:13:50.580 "num_blocks": 253952, 00:13:50.580 "uuid": "d602448c-4225-11ef-aa83-81fbc7dfef58", 00:13:50.580 "assigned_rate_limits": { 00:13:50.580 "rw_ios_per_sec": 0, 00:13:50.580 "rw_mbytes_per_sec": 0, 00:13:50.580 "r_mbytes_per_sec": 0, 00:13:50.580 "w_mbytes_per_sec": 0 00:13:50.580 }, 00:13:50.580 "claimed": false, 00:13:50.580 "zoned": false, 00:13:50.580 "supported_io_types": { 00:13:50.580 "read": true, 00:13:50.580 "write": true, 00:13:50.580 "unmap": true, 00:13:50.580 "flush": true, 00:13:50.580 "reset": true, 00:13:50.580 "nvme_admin": false, 00:13:50.580 "nvme_io": false, 00:13:50.580 "nvme_io_md": false, 00:13:50.580 "write_zeroes": true, 00:13:50.580 "zcopy": false, 00:13:50.580 "get_zone_info": false, 00:13:50.580 "zone_management": false, 00:13:50.580 "zone_append": false, 00:13:50.580 "compare": false, 00:13:50.580 "compare_and_write": false, 00:13:50.580 "abort": false, 00:13:50.580 "seek_hole": false, 00:13:50.580 "seek_data": false, 00:13:50.580 "copy": false, 00:13:50.580 "nvme_iov_md": false 00:13:50.580 }, 00:13:50.580 "memory_domains": [ 00:13:50.580 { 00:13:50.580 "dma_device_id": "system", 00:13:50.580 "dma_device_type": 1 00:13:50.580 }, 00:13:50.580 { 00:13:50.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.580 "dma_device_type": 2 00:13:50.580 }, 00:13:50.580 { 00:13:50.580 "dma_device_id": "system", 00:13:50.580 "dma_device_type": 1 00:13:50.580 }, 00:13:50.580 { 00:13:50.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.580 "dma_device_type": 2 00:13:50.580 }, 00:13:50.580 { 00:13:50.580 "dma_device_id": "system", 00:13:50.580 "dma_device_type": 1 00:13:50.580 }, 00:13:50.580 { 00:13:50.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.580 "dma_device_type": 2 00:13:50.580 }, 00:13:50.580 { 00:13:50.580 "dma_device_id": "system", 00:13:50.580 "dma_device_type": 1 00:13:50.580 }, 00:13:50.580 { 00:13:50.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.580 "dma_device_type": 2 00:13:50.580 } 00:13:50.580 ], 00:13:50.580 "driver_specific": { 00:13:50.580 "raid": { 00:13:50.580 "uuid": "d602448c-4225-11ef-aa83-81fbc7dfef58", 00:13:50.580 "strip_size_kb": 64, 00:13:50.580 "state": "online", 00:13:50.580 "raid_level": "raid0", 00:13:50.580 "superblock": true, 00:13:50.580 "num_base_bdevs": 4, 00:13:50.580 "num_base_bdevs_discovered": 4, 00:13:50.580 "num_base_bdevs_operational": 4, 00:13:50.580 "base_bdevs_list": [ 00:13:50.580 { 00:13:50.580 "name": "pt1", 00:13:50.580 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:50.580 "is_configured": true, 00:13:50.580 "data_offset": 2048, 00:13:50.580 "data_size": 63488 00:13:50.580 }, 00:13:50.580 { 00:13:50.580 "name": "pt2", 00:13:50.580 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:50.580 "is_configured": true, 00:13:50.580 "data_offset": 2048, 00:13:50.580 "data_size": 63488 00:13:50.580 }, 00:13:50.580 { 00:13:50.580 "name": "pt3", 00:13:50.580 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:50.580 "is_configured": true, 00:13:50.580 "data_offset": 2048, 00:13:50.580 "data_size": 63488 00:13:50.580 }, 00:13:50.580 { 00:13:50.580 "name": "pt4", 00:13:50.580 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:50.580 "is_configured": true, 00:13:50.580 "data_offset": 2048, 00:13:50.580 
"data_size": 63488 00:13:50.580 } 00:13:50.580 ] 00:13:50.580 } 00:13:50.580 } 00:13:50.580 }' 00:13:50.580 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:13:50.580 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:13:50.580 pt2 00:13:50.580 pt3 00:13:50.580 pt4' 00:13:50.580 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:50.580 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:13:50.580 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:50.839 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:50.839 "name": "pt1", 00:13:50.839 "aliases": [ 00:13:50.839 "00000000-0000-0000-0000-000000000001" 00:13:50.839 ], 00:13:50.839 "product_name": "passthru", 00:13:50.839 "block_size": 512, 00:13:50.839 "num_blocks": 65536, 00:13:50.839 "uuid": "00000000-0000-0000-0000-000000000001", 00:13:50.839 "assigned_rate_limits": { 00:13:50.839 "rw_ios_per_sec": 0, 00:13:50.839 "rw_mbytes_per_sec": 0, 00:13:50.839 "r_mbytes_per_sec": 0, 00:13:50.839 "w_mbytes_per_sec": 0 00:13:50.839 }, 00:13:50.839 "claimed": true, 00:13:50.839 "claim_type": "exclusive_write", 00:13:50.839 "zoned": false, 00:13:50.839 "supported_io_types": { 00:13:50.839 "read": true, 00:13:50.839 "write": true, 00:13:50.839 "unmap": true, 00:13:50.839 "flush": true, 00:13:50.839 "reset": true, 00:13:50.839 "nvme_admin": false, 00:13:50.839 "nvme_io": false, 00:13:50.839 "nvme_io_md": false, 00:13:50.839 "write_zeroes": true, 00:13:50.839 "zcopy": true, 00:13:50.839 "get_zone_info": false, 00:13:50.839 "zone_management": false, 00:13:50.839 "zone_append": false, 00:13:50.840 "compare": false, 00:13:50.840 "compare_and_write": false, 00:13:50.840 "abort": true, 00:13:50.840 "seek_hole": false, 00:13:50.840 "seek_data": false, 00:13:50.840 "copy": true, 00:13:50.840 "nvme_iov_md": false 00:13:50.840 }, 00:13:50.840 "memory_domains": [ 00:13:50.840 { 00:13:50.840 "dma_device_id": "system", 00:13:50.840 "dma_device_type": 1 00:13:50.840 }, 00:13:50.840 { 00:13:50.840 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:50.840 "dma_device_type": 2 00:13:50.840 } 00:13:50.840 ], 00:13:50.840 "driver_specific": { 00:13:50.840 "passthru": { 00:13:50.840 "name": "pt1", 00:13:50.840 "base_bdev_name": "malloc1" 00:13:50.840 } 00:13:50.840 } 00:13:50.840 }' 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:50.840 21:13:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:50.840 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:51.099 "name": "pt2", 00:13:51.099 "aliases": [ 00:13:51.099 "00000000-0000-0000-0000-000000000002" 00:13:51.099 ], 00:13:51.099 "product_name": "passthru", 00:13:51.099 "block_size": 512, 00:13:51.099 "num_blocks": 65536, 00:13:51.099 "uuid": "00000000-0000-0000-0000-000000000002", 00:13:51.099 "assigned_rate_limits": { 00:13:51.099 "rw_ios_per_sec": 0, 00:13:51.099 "rw_mbytes_per_sec": 0, 00:13:51.099 "r_mbytes_per_sec": 0, 00:13:51.099 "w_mbytes_per_sec": 0 00:13:51.099 }, 00:13:51.099 "claimed": true, 00:13:51.099 "claim_type": "exclusive_write", 00:13:51.099 "zoned": false, 00:13:51.099 "supported_io_types": { 00:13:51.099 "read": true, 00:13:51.099 "write": true, 00:13:51.099 "unmap": true, 00:13:51.099 "flush": true, 00:13:51.099 "reset": true, 00:13:51.099 "nvme_admin": false, 00:13:51.099 "nvme_io": false, 00:13:51.099 "nvme_io_md": false, 00:13:51.099 "write_zeroes": true, 00:13:51.099 "zcopy": true, 00:13:51.099 "get_zone_info": false, 00:13:51.099 "zone_management": false, 00:13:51.099 "zone_append": false, 00:13:51.099 "compare": false, 00:13:51.099 "compare_and_write": false, 00:13:51.099 "abort": true, 00:13:51.099 "seek_hole": false, 00:13:51.099 "seek_data": false, 00:13:51.099 "copy": true, 00:13:51.099 "nvme_iov_md": false 00:13:51.099 }, 00:13:51.099 "memory_domains": [ 00:13:51.099 { 00:13:51.099 "dma_device_id": "system", 00:13:51.099 "dma_device_type": 1 00:13:51.099 }, 00:13:51.099 { 00:13:51.099 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.099 "dma_device_type": 2 00:13:51.099 } 00:13:51.099 ], 00:13:51.099 "driver_specific": { 00:13:51.099 "passthru": { 00:13:51.099 "name": "pt2", 00:13:51.099 "base_bdev_name": "malloc2" 00:13:51.099 } 00:13:51.099 } 00:13:51.099 }' 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:13:51.099 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:51.359 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:51.359 "name": "pt3", 00:13:51.359 "aliases": [ 00:13:51.359 "00000000-0000-0000-0000-000000000003" 00:13:51.359 ], 00:13:51.359 "product_name": "passthru", 00:13:51.359 "block_size": 512, 00:13:51.359 "num_blocks": 65536, 00:13:51.359 "uuid": "00000000-0000-0000-0000-000000000003", 00:13:51.359 "assigned_rate_limits": { 00:13:51.359 "rw_ios_per_sec": 0, 00:13:51.359 "rw_mbytes_per_sec": 0, 00:13:51.359 "r_mbytes_per_sec": 0, 00:13:51.359 "w_mbytes_per_sec": 0 00:13:51.359 }, 00:13:51.359 "claimed": true, 00:13:51.359 "claim_type": "exclusive_write", 00:13:51.359 "zoned": false, 00:13:51.359 "supported_io_types": { 00:13:51.359 "read": true, 00:13:51.359 "write": true, 00:13:51.359 "unmap": true, 00:13:51.359 "flush": true, 00:13:51.359 "reset": true, 00:13:51.359 "nvme_admin": false, 00:13:51.359 "nvme_io": false, 00:13:51.359 "nvme_io_md": false, 00:13:51.359 "write_zeroes": true, 00:13:51.359 "zcopy": true, 00:13:51.359 "get_zone_info": false, 00:13:51.359 "zone_management": false, 00:13:51.359 "zone_append": false, 00:13:51.359 "compare": false, 00:13:51.359 "compare_and_write": false, 00:13:51.359 "abort": true, 00:13:51.359 "seek_hole": false, 00:13:51.359 "seek_data": false, 00:13:51.359 "copy": true, 00:13:51.359 "nvme_iov_md": false 00:13:51.359 }, 00:13:51.359 "memory_domains": [ 00:13:51.359 { 00:13:51.359 "dma_device_id": "system", 00:13:51.359 "dma_device_type": 1 00:13:51.359 }, 00:13:51.359 { 00:13:51.359 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.359 "dma_device_type": 2 00:13:51.359 } 00:13:51.359 ], 00:13:51.359 "driver_specific": { 00:13:51.359 "passthru": { 00:13:51.359 "name": "pt3", 00:13:51.359 "base_bdev_name": "malloc3" 00:13:51.359 } 00:13:51.359 } 00:13:51.359 }' 00:13:51.359 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.359 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.359 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:51.359 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.359 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.359 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:51.359 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.618 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.618 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:51.618 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.618 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.618 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:51.618 21:13:02 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:13:51.618 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:13:51.618 21:13:02 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:13:51.875 "name": "pt4", 00:13:51.875 "aliases": [ 00:13:51.875 "00000000-0000-0000-0000-000000000004" 00:13:51.875 ], 00:13:51.875 "product_name": "passthru", 00:13:51.875 "block_size": 512, 00:13:51.875 "num_blocks": 65536, 00:13:51.875 "uuid": "00000000-0000-0000-0000-000000000004", 00:13:51.875 "assigned_rate_limits": { 00:13:51.875 "rw_ios_per_sec": 0, 00:13:51.875 "rw_mbytes_per_sec": 0, 00:13:51.875 "r_mbytes_per_sec": 0, 00:13:51.875 "w_mbytes_per_sec": 0 00:13:51.875 }, 00:13:51.875 "claimed": true, 00:13:51.875 "claim_type": "exclusive_write", 00:13:51.875 "zoned": false, 00:13:51.875 "supported_io_types": { 00:13:51.875 "read": true, 00:13:51.875 "write": true, 00:13:51.875 "unmap": true, 00:13:51.875 "flush": true, 00:13:51.875 "reset": true, 00:13:51.875 "nvme_admin": false, 00:13:51.875 "nvme_io": false, 00:13:51.875 "nvme_io_md": false, 00:13:51.875 "write_zeroes": true, 00:13:51.875 "zcopy": true, 00:13:51.875 "get_zone_info": false, 00:13:51.875 "zone_management": false, 00:13:51.875 "zone_append": false, 00:13:51.875 "compare": false, 00:13:51.875 "compare_and_write": false, 00:13:51.875 "abort": true, 00:13:51.875 "seek_hole": false, 00:13:51.875 "seek_data": false, 00:13:51.875 "copy": true, 00:13:51.875 "nvme_iov_md": false 00:13:51.875 }, 00:13:51.875 "memory_domains": [ 00:13:51.875 { 00:13:51.875 "dma_device_id": "system", 00:13:51.875 "dma_device_type": 1 00:13:51.875 }, 00:13:51.875 { 00:13:51.875 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:13:51.875 "dma_device_type": 2 00:13:51.875 } 00:13:51.875 ], 00:13:51.875 "driver_specific": { 00:13:51.875 "passthru": { 00:13:51.875 "name": "pt4", 00:13:51.875 "base_bdev_name": "malloc4" 00:13:51.875 } 00:13:51.875 } 00:13:51.875 }' 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:13:51.875 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:13:52.133 [2024-07-14 21:13:03.519660] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' d602448c-4225-11ef-aa83-81fbc7dfef58 '!=' d602448c-4225-11ef-aa83-81fbc7dfef58 ']' 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid0 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 59915 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 59915 ']' 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 59915 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 59915 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:13:52.133 killing process with pid 59915 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 59915' 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 59915 00:13:52.133 [2024-07-14 21:13:03.547720] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:52.133 [2024-07-14 21:13:03.547748] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:52.133 [2024-07-14 21:13:03.547778] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:52.133 [2024-07-14 21:13:03.547783] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x361352e34c80 name raid_bdev1, state offline 00:13:52.133 21:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 59915 00:13:52.133 [2024-07-14 21:13:03.572401] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:52.392 21:13:03 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:13:52.392 00:13:52.392 real 0m12.469s 00:13:52.392 user 0m21.963s 00:13:52.392 sys 0m2.215s 00:13:52.392 21:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:52.392 21:13:03 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.392 ************************************ 00:13:52.392 END TEST raid_superblock_test 00:13:52.392 ************************************ 00:13:52.392 21:13:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:52.392 21:13:03 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid0 4 read 00:13:52.392 21:13:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:52.392 21:13:03 bdev_raid -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:13:52.392 21:13:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:52.392 ************************************ 00:13:52.392 START TEST raid_read_error_test 00:13:52.392 ************************************ 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 read 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.8saSIKVKIF 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60312 00:13:52.392 21:13:03 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60312 /var/tmp/spdk-raid.sock 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 60312 ']' 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:52.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:52.392 21:13:03 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:52.392 [2024-07-14 21:13:03.810232] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:52.392 [2024-07-14 21:13:03.810495] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:52.959 EAL: TSC is not safe to use in SMP mode 00:13:52.959 EAL: TSC is not invariant 00:13:52.959 [2024-07-14 21:13:04.337756] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:52.959 [2024-07-14 21:13:04.424970] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
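For reference, the per-device bdev stack that raid_io_error_test assembles can be summarized as the following shell sketch. It is reconstructed from the rpc.py calls traced below in this log (same socket path, sizes, and bdev names); the loop is illustrative and not part of the test script itself:

# Sketch only (not log output): stack built for each base device in
# raid_read_error_test, reconstructed from the traced RPC calls below.
RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
for i in 1 2 3 4; do
  # 32 MiB malloc bdev with 512-byte blocks (65536 blocks, matching the trace)
  $RPC bdev_malloc_create 32 512 -b "BaseBdev${i}_malloc"
  # error-injection wrapper; SPDK names it EE_BaseBdev${i}_malloc
  $RPC bdev_error_create "BaseBdev${i}_malloc"
  # passthru bdev on top, exposed as BaseBdev${i}
  $RPC bdev_passthru_create -b "EE_BaseBdev${i}_malloc" -p "BaseBdev${i}"
done
# The four passthru bdevs then back a raid0 volume with a superblock (-s):
$RPC bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s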
00:13:52.959 [2024-07-14 21:13:04.427226] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:52.959 [2024-07-14 21:13:04.428041] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:52.959 [2024-07-14 21:13:04.428048] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:53.526 21:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:53.526 21:13:04 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:13:53.526 21:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:53.526 21:13:04 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:13:53.526 BaseBdev1_malloc 00:13:53.783 21:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:13:53.783 true 00:13:53.783 21:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:13:54.041 [2024-07-14 21:13:05.489377] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:13:54.041 [2024-07-14 21:13:05.489447] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.041 [2024-07-14 21:13:05.489471] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x351d95034780 00:13:54.041 [2024-07-14 21:13:05.489478] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.041 [2024-07-14 21:13:05.490200] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.041 [2024-07-14 21:13:05.490222] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:13:54.041 BaseBdev1 00:13:54.041 21:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:54.041 21:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:13:54.299 BaseBdev2_malloc 00:13:54.299 21:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:13:54.556 true 00:13:54.556 21:13:05 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:13:54.826 [2024-07-14 21:13:06.173403] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:13:54.826 [2024-07-14 21:13:06.173480] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:54.826 [2024-07-14 21:13:06.173505] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x351d95034c80 00:13:54.826 [2024-07-14 21:13:06.173513] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:54.826 [2024-07-14 21:13:06.174253] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:54.826 [2024-07-14 21:13:06.174277] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created 
pt_bdev for: BaseBdev2 00:13:54.826 BaseBdev2 00:13:54.826 21:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:54.826 21:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:13:55.084 BaseBdev3_malloc 00:13:55.084 21:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:13:55.342 true 00:13:55.342 21:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:13:55.342 [2024-07-14 21:13:06.857428] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:13:55.342 [2024-07-14 21:13:06.857485] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:55.342 [2024-07-14 21:13:06.857539] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x351d95035180 00:13:55.342 [2024-07-14 21:13:06.857546] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:55.342 [2024-07-14 21:13:06.858218] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:55.342 [2024-07-14 21:13:06.858270] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:13:55.342 BaseBdev3 00:13:55.342 21:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:13:55.342 21:13:06 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:13:55.600 BaseBdev4_malloc 00:13:55.600 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:13:55.858 true 00:13:55.858 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:13:56.116 [2024-07-14 21:13:07.537448] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:13:56.116 [2024-07-14 21:13:07.537505] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:13:56.117 [2024-07-14 21:13:07.537561] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x351d95035680 00:13:56.117 [2024-07-14 21:13:07.537568] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:13:56.117 [2024-07-14 21:13:07.538274] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:13:56.117 [2024-07-14 21:13:07.538296] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:13:56.117 BaseBdev4 00:13:56.117 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:13:56.375 [2024-07-14 21:13:07.797478] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:13:56.375 [2024-07-14 21:13:07.798089] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:13:56.375 [2024-07-14 21:13:07.798113] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:13:56.375 [2024-07-14 21:13:07.798142] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:13:56.375 [2024-07-14 21:13:07.798205] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x351d95035900 00:13:56.375 [2024-07-14 21:13:07.798210] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:13:56.375 [2024-07-14 21:13:07.798276] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x351d950a0e20 00:13:56.375 [2024-07-14 21:13:07.798362] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x351d95035900 00:13:56.375 [2024-07-14 21:13:07.798366] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x351d95035900 00:13:56.375 [2024-07-14 21:13:07.798392] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:56.375 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:56.375 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:56.375 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:56.375 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:56.375 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:56.375 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:56.375 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:56.375 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:56.375 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:56.375 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:56.375 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:56.375 21:13:07 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:56.633 21:13:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:56.633 "name": "raid_bdev1", 00:13:56.633 "uuid": "de06a2ac-4225-11ef-aa83-81fbc7dfef58", 00:13:56.633 "strip_size_kb": 64, 00:13:56.633 "state": "online", 00:13:56.633 "raid_level": "raid0", 00:13:56.633 "superblock": true, 00:13:56.633 "num_base_bdevs": 4, 00:13:56.633 "num_base_bdevs_discovered": 4, 00:13:56.633 "num_base_bdevs_operational": 4, 00:13:56.633 "base_bdevs_list": [ 00:13:56.633 { 00:13:56.633 "name": "BaseBdev1", 00:13:56.633 "uuid": "ca2498d4-5f6c-d457-89bb-124ba4867aeb", 00:13:56.633 "is_configured": true, 00:13:56.633 "data_offset": 2048, 00:13:56.633 "data_size": 63488 00:13:56.633 }, 00:13:56.633 { 00:13:56.633 "name": "BaseBdev2", 00:13:56.633 "uuid": "882be22a-bd78-5d57-9d13-23b388ad46fe", 00:13:56.633 "is_configured": true, 00:13:56.633 "data_offset": 2048, 00:13:56.633 "data_size": 63488 00:13:56.633 }, 00:13:56.633 { 00:13:56.633 "name": "BaseBdev3", 00:13:56.633 "uuid": 
"0f9d44f5-21d6-f45f-ba78-f01af521dff3", 00:13:56.633 "is_configured": true, 00:13:56.633 "data_offset": 2048, 00:13:56.633 "data_size": 63488 00:13:56.633 }, 00:13:56.633 { 00:13:56.633 "name": "BaseBdev4", 00:13:56.633 "uuid": "4f40cd5c-c571-ce54-919f-b694ab2e9b27", 00:13:56.633 "is_configured": true, 00:13:56.633 "data_offset": 2048, 00:13:56.633 "data_size": 63488 00:13:56.633 } 00:13:56.633 ] 00:13:56.633 }' 00:13:56.634 21:13:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:56.634 21:13:08 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:56.892 21:13:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:13:56.892 21:13:08 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:13:57.151 [2024-07-14 21:13:08.501703] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x351d950a0ec0 00:13:58.098 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:13:58.355 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:13:58.613 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:13:58.613 "name": "raid_bdev1", 00:13:58.613 "uuid": "de06a2ac-4225-11ef-aa83-81fbc7dfef58", 00:13:58.613 "strip_size_kb": 64, 00:13:58.613 "state": "online", 00:13:58.613 "raid_level": "raid0", 00:13:58.613 "superblock": true, 00:13:58.613 "num_base_bdevs": 4, 00:13:58.613 "num_base_bdevs_discovered": 4, 00:13:58.613 "num_base_bdevs_operational": 4, 00:13:58.613 "base_bdevs_list": [ 00:13:58.613 { 00:13:58.613 "name": "BaseBdev1", 00:13:58.613 "uuid": 
"ca2498d4-5f6c-d457-89bb-124ba4867aeb", 00:13:58.613 "is_configured": true, 00:13:58.613 "data_offset": 2048, 00:13:58.613 "data_size": 63488 00:13:58.613 }, 00:13:58.613 { 00:13:58.613 "name": "BaseBdev2", 00:13:58.613 "uuid": "882be22a-bd78-5d57-9d13-23b388ad46fe", 00:13:58.613 "is_configured": true, 00:13:58.613 "data_offset": 2048, 00:13:58.613 "data_size": 63488 00:13:58.613 }, 00:13:58.613 { 00:13:58.613 "name": "BaseBdev3", 00:13:58.613 "uuid": "0f9d44f5-21d6-f45f-ba78-f01af521dff3", 00:13:58.613 "is_configured": true, 00:13:58.613 "data_offset": 2048, 00:13:58.613 "data_size": 63488 00:13:58.613 }, 00:13:58.613 { 00:13:58.613 "name": "BaseBdev4", 00:13:58.613 "uuid": "4f40cd5c-c571-ce54-919f-b694ab2e9b27", 00:13:58.613 "is_configured": true, 00:13:58.613 "data_offset": 2048, 00:13:58.613 "data_size": 63488 00:13:58.613 } 00:13:58.613 ] 00:13:58.613 }' 00:13:58.613 21:13:09 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:13:58.613 21:13:09 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:58.871 21:13:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:13:59.130 [2024-07-14 21:13:10.479395] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:13:59.130 [2024-07-14 21:13:10.479423] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:13:59.130 [2024-07-14 21:13:10.479737] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:13:59.130 [2024-07-14 21:13:10.479747] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:13:59.130 [2024-07-14 21:13:10.479754] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:13:59.130 [2024-07-14 21:13:10.479758] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x351d95035900 name raid_bdev1, state offline 00:13:59.130 0 00:13:59.130 21:13:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60312 00:13:59.130 21:13:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 60312 ']' 00:13:59.130 21:13:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 60312 00:13:59.130 21:13:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:13:59.130 21:13:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:13:59.130 21:13:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60312 00:13:59.130 21:13:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:13:59.130 21:13:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:13:59.130 21:13:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:13:59.130 21:13:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60312' 00:13:59.130 killing process with pid 60312 00:13:59.130 21:13:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 60312 00:13:59.130 [2024-07-14 21:13:10.509315] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:13:59.130 21:13:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 60312 00:13:59.130 [2024-07-14 21:13:10.532952] 
bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:13:59.389 21:13:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:13:59.389 21:13:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.8saSIKVKIF 00:13:59.389 21:13:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:13:59.389 21:13:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.51 00:13:59.389 21:13:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:13:59.389 21:13:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:13:59.389 21:13:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:13:59.389 21:13:10 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.51 != \0\.\0\0 ]] 00:13:59.389 00:13:59.389 real 0m6.918s 00:13:59.389 user 0m11.074s 00:13:59.389 sys 0m1.076s 00:13:59.389 21:13:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:59.389 21:13:10 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.389 ************************************ 00:13:59.389 END TEST raid_read_error_test 00:13:59.389 ************************************ 00:13:59.389 21:13:10 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:13:59.389 21:13:10 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid0 4 write 00:13:59.389 21:13:10 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:13:59.389 21:13:10 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:59.389 21:13:10 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:13:59.389 ************************************ 00:13:59.389 START TEST raid_write_error_test 00:13:59.389 ************************************ 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid0 4 write 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid0 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid0 '!=' raid1 ']' 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.GL5DkrUy3n 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=60450 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 60450 /var/tmp/spdk-raid.sock 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 60450 ']' 00:13:59.389 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:59.389 21:13:10 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:13:59.389 [2024-07-14 21:13:10.776372] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:13:59.389 [2024-07-14 21:13:10.776543] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:13:59.967 EAL: TSC is not safe to use in SMP mode 00:13:59.967 EAL: TSC is not invariant 00:13:59.967 [2024-07-14 21:13:11.279611] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:59.967 [2024-07-14 21:13:11.362709] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
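Both error tests pass or fail on the failure rate bdevperf writes to its log: the read test above pulled fail_per_s=0.51 out of /raidtest/tmp.8saSIKVKIF, and the write test starting here repeats the check against /raidtest/tmp.GL5DkrUy3n. A sketch of that extraction, assuming, as the trace's awk '{print $6}' implies, that the rate sits in column 6 of the raid_bdev1 summary row:

    # Fail-rate check from bdev_raid.sh@843-847. raid0 carries no redundancy
    # (has_redundancy returns 1), so the injected I/O error must surface as a
    # nonzero failure rate for the test to count as a pass.
    bdevperf_log=/raidtest/tmp.GL5DkrUy3n
    fail_per_s=$(grep -v Job "$bdevperf_log" | grep raid_bdev1 | awk '{print $6}')
    [[ $fail_per_s != "0.00" ]]   # observed: 0.51 for read, 0.52 for write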
00:13:59.967 [2024-07-14 21:13:11.364823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.967 [2024-07-14 21:13:11.365590] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:13:59.967 [2024-07-14 21:13:11.365603] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:00.533 21:13:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:00.533 21:13:11 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:14:00.533 21:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:00.533 21:13:11 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:14:00.533 BaseBdev1_malloc 00:14:00.533 21:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:14:00.790 true 00:14:00.790 21:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:14:01.048 [2024-07-14 21:13:12.577851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:14:01.048 [2024-07-14 21:13:12.577937] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.048 [2024-07-14 21:13:12.577962] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e2440434780 00:14:01.048 [2024-07-14 21:13:12.577970] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.048 [2024-07-14 21:13:12.578601] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.048 [2024-07-14 21:13:12.578625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:14:01.048 BaseBdev1 00:14:01.048 21:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:01.048 21:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:14:01.306 BaseBdev2_malloc 00:14:01.306 21:13:12 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:14:01.564 true 00:14:01.564 21:13:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:14:01.821 [2024-07-14 21:13:13.313871] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:14:01.821 [2024-07-14 21:13:13.313935] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:01.821 [2024-07-14 21:13:13.313961] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e2440434c80 00:14:01.821 [2024-07-14 21:13:13.313968] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:01.821 [2024-07-14 21:13:13.314627] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:01.821 [2024-07-14 21:13:13.314649] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:14:01.821 BaseBdev2 00:14:01.821 21:13:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:01.821 21:13:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:14:02.079 BaseBdev3_malloc 00:14:02.079 21:13:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:14:02.337 true 00:14:02.337 21:13:13 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:14:02.595 [2024-07-14 21:13:13.989882] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:14:02.595 [2024-07-14 21:13:13.989951] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:02.595 [2024-07-14 21:13:13.989980] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e2440435180 00:14:02.595 [2024-07-14 21:13:13.989988] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:02.595 [2024-07-14 21:13:13.990803] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:02.595 [2024-07-14 21:13:13.990829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:14:02.595 BaseBdev3 00:14:02.595 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:14:02.595 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:14:02.853 BaseBdev4_malloc 00:14:02.853 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:14:03.111 true 00:14:03.111 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:14:03.111 [2024-07-14 21:13:14.657923] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:14:03.111 [2024-07-14 21:13:14.657996] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:03.111 [2024-07-14 21:13:14.658034] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e2440435680 00:14:03.111 [2024-07-14 21:13:14.658042] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:03.111 [2024-07-14 21:13:14.658773] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:03.111 [2024-07-14 21:13:14.658795] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:14:03.370 BaseBdev4 00:14:03.370 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r raid0 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:14:03.627 [2024-07-14 21:13:14.917911] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:03.628 [2024-07-14 21:13:14.918674] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:03.628 [2024-07-14 21:13:14.918700] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:03.628 [2024-07-14 21:13:14.918717] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:03.628 [2024-07-14 21:13:14.918802] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e2440435900 00:14:03.628 [2024-07-14 21:13:14.918810] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:03.628 [2024-07-14 21:13:14.918853] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e24404a0e20 00:14:03.628 [2024-07-14 21:13:14.918941] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e2440435900 00:14:03.628 [2024-07-14 21:13:14.918945] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e2440435900 00:14:03.628 [2024-07-14 21:13:14.918975] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:03.628 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:03.628 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:03.628 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:03.628 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:03.628 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:03.628 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:03.628 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:03.628 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:03.628 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:03.628 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:03.628 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:03.628 21:13:14 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:03.886 21:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:03.886 "name": "raid_bdev1", 00:14:03.886 "uuid": "e245207e-4225-11ef-aa83-81fbc7dfef58", 00:14:03.886 "strip_size_kb": 64, 00:14:03.886 "state": "online", 00:14:03.886 "raid_level": "raid0", 00:14:03.886 "superblock": true, 00:14:03.886 "num_base_bdevs": 4, 00:14:03.886 "num_base_bdevs_discovered": 4, 00:14:03.886 "num_base_bdevs_operational": 4, 00:14:03.886 "base_bdevs_list": [ 00:14:03.886 { 00:14:03.886 "name": "BaseBdev1", 00:14:03.886 "uuid": "ba57ec3c-317c-5951-ad2e-d7d6b42b7306", 00:14:03.886 "is_configured": true, 00:14:03.886 "data_offset": 2048, 00:14:03.886 "data_size": 63488 00:14:03.886 }, 00:14:03.886 { 00:14:03.886 "name": "BaseBdev2", 00:14:03.886 "uuid": "60d06cf8-0bba-9656-bd5a-1a317313ff31", 00:14:03.886 "is_configured": true, 00:14:03.886 "data_offset": 2048, 00:14:03.886 "data_size": 63488 00:14:03.886 }, 00:14:03.886 { 00:14:03.886 "name": "BaseBdev3", 00:14:03.886 "uuid": 
"90537384-b610-a159-9305-0add3aa044f9", 00:14:03.886 "is_configured": true, 00:14:03.886 "data_offset": 2048, 00:14:03.886 "data_size": 63488 00:14:03.886 }, 00:14:03.886 { 00:14:03.886 "name": "BaseBdev4", 00:14:03.886 "uuid": "d465855e-513f-ba5c-9e32-213655610b72", 00:14:03.886 "is_configured": true, 00:14:03.886 "data_offset": 2048, 00:14:03.886 "data_size": 63488 00:14:03.886 } 00:14:03.886 ] 00:14:03.886 }' 00:14:03.886 21:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:03.886 21:13:15 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:04.144 21:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:14:04.144 21:13:15 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:14:04.144 [2024-07-14 21:13:15.578147] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e24404a0ec0 00:14:05.080 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid0 = \r\a\i\d\1 ]] 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid0 64 4 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid0 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:05.338 21:13:16 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:14:05.596 21:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:05.596 "name": "raid_bdev1", 00:14:05.596 "uuid": "e245207e-4225-11ef-aa83-81fbc7dfef58", 00:14:05.596 "strip_size_kb": 64, 00:14:05.596 "state": "online", 00:14:05.596 "raid_level": "raid0", 00:14:05.596 "superblock": true, 00:14:05.596 "num_base_bdevs": 4, 00:14:05.596 "num_base_bdevs_discovered": 4, 00:14:05.596 "num_base_bdevs_operational": 4, 00:14:05.596 "base_bdevs_list": [ 00:14:05.596 { 00:14:05.596 "name": "BaseBdev1", 00:14:05.596 "uuid": 
"ba57ec3c-317c-5951-ad2e-d7d6b42b7306", 00:14:05.596 "is_configured": true, 00:14:05.596 "data_offset": 2048, 00:14:05.596 "data_size": 63488 00:14:05.596 }, 00:14:05.596 { 00:14:05.596 "name": "BaseBdev2", 00:14:05.596 "uuid": "60d06cf8-0bba-9656-bd5a-1a317313ff31", 00:14:05.596 "is_configured": true, 00:14:05.596 "data_offset": 2048, 00:14:05.596 "data_size": 63488 00:14:05.596 }, 00:14:05.596 { 00:14:05.596 "name": "BaseBdev3", 00:14:05.596 "uuid": "90537384-b610-a159-9305-0add3aa044f9", 00:14:05.596 "is_configured": true, 00:14:05.596 "data_offset": 2048, 00:14:05.596 "data_size": 63488 00:14:05.596 }, 00:14:05.596 { 00:14:05.596 "name": "BaseBdev4", 00:14:05.596 "uuid": "d465855e-513f-ba5c-9e32-213655610b72", 00:14:05.596 "is_configured": true, 00:14:05.596 "data_offset": 2048, 00:14:05.596 "data_size": 63488 00:14:05.596 } 00:14:05.596 ] 00:14:05.596 }' 00:14:05.596 21:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:05.596 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:05.854 21:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:14:06.113 [2024-07-14 21:13:17.505180] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:14:06.113 [2024-07-14 21:13:17.505215] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:06.113 [2024-07-14 21:13:17.505591] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:06.113 [2024-07-14 21:13:17.505609] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:06.113 [2024-07-14 21:13:17.505618] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:06.113 [2024-07-14 21:13:17.505623] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e2440435900 name raid_bdev1, state offline 00:14:06.113 0 00:14:06.113 21:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 60450 00:14:06.113 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 60450 ']' 00:14:06.113 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 60450 00:14:06.113 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:14:06.113 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:06.113 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 60450 00:14:06.113 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:14:06.113 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:14:06.113 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:14:06.113 killing process with pid 60450 00:14:06.113 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60450' 00:14:06.113 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 60450 00:14:06.113 [2024-07-14 21:13:17.528425] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:06.113 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 60450 00:14:06.113 [2024-07-14 
21:13:17.559700] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:06.371 21:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.GL5DkrUy3n 00:14:06.371 21:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:14:06.371 21:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:14:06.371 21:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.52 00:14:06.371 21:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid0 00:14:06.371 21:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:06.371 21:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:06.371 21:13:17 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.52 != \0\.\0\0 ]] 00:14:06.371 00:14:06.371 real 0m7.030s 00:14:06.371 user 0m11.242s 00:14:06.371 sys 0m0.977s 00:14:06.371 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:06.371 ************************************ 00:14:06.371 END TEST raid_write_error_test 00:14:06.371 ************************************ 00:14:06.371 21:13:17 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.371 21:13:17 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:06.371 21:13:17 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:14:06.371 21:13:17 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test concat 4 false 00:14:06.371 21:13:17 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:06.371 21:13:17 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:06.371 21:13:17 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:06.371 ************************************ 00:14:06.371 START TEST raid_state_function_test 00:14:06.371 ************************************ 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 false 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo 
BaseBdev3 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=60586 00:14:06.371 Process raid pid: 60586 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 60586' 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 60586 /var/tmp/spdk-raid.sock 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 60586 ']' 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:06.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:06.371 21:13:17 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:06.372 [2024-07-14 21:13:17.858040] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
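Unlike the two error tests, raid_state_function_test drives a bare bdev_svc app and asserts on RAID metadata through verify_raid_bdev_state, whose locals fill the trace below. The heart of that helper is an RPC dump filtered with jq. A sketch under the assumption that individual fields are then compared with further jq calls; the trace itself only shows the outer select:

    # State check in the spirit of verify_raid_bdev_state (bdev_raid.sh@116-128).
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_bdev_info=$($rpc bdev_raid_get_bdevs all |
        jq -r '.[] | select(.name == "Existed_Raid")')
    # Illustrative field checks; names mirror the helper's expected_* locals:
    [[ $(jq -r '.state' <<<"$raid_bdev_info") == configuring ]]
    [[ $(jq -r '.num_base_bdevs_discovered' <<<"$raid_bdev_info") -eq 0 ]]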
00:14:06.372 [2024-07-14 21:13:17.858324] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:06.939 EAL: TSC is not safe to use in SMP mode 00:14:06.939 EAL: TSC is not invariant 00:14:06.939 [2024-07-14 21:13:18.416102] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.198 [2024-07-14 21:13:18.520738] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:07.198 [2024-07-14 21:13:18.523406] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.198 [2024-07-14 21:13:18.524376] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.198 [2024-07-14 21:13:18.524393] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:07.457 21:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:07.457 21:13:18 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:14:07.457 21:13:18 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:07.715 [2024-07-14 21:13:19.027142] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:07.715 [2024-07-14 21:13:19.027216] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:07.715 [2024-07-14 21:13:19.027234] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:07.715 [2024-07-14 21:13:19.027246] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:07.715 [2024-07-14 21:13:19.027250] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:07.715 [2024-07-14 21:13:19.027260] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:07.715 [2024-07-14 21:13:19.027265] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:07.715 [2024-07-14 21:13:19.027274] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:07.715 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:07.715 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:07.715 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:07.715 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:07.715 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:07.715 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:07.715 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:07.715 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:07.716 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:07.716 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:07.716 21:13:19 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:07.716 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:07.974 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:07.974 "name": "Existed_Raid", 00:14:07.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.974 "strip_size_kb": 64, 00:14:07.974 "state": "configuring", 00:14:07.974 "raid_level": "concat", 00:14:07.974 "superblock": false, 00:14:07.974 "num_base_bdevs": 4, 00:14:07.974 "num_base_bdevs_discovered": 0, 00:14:07.974 "num_base_bdevs_operational": 4, 00:14:07.974 "base_bdevs_list": [ 00:14:07.974 { 00:14:07.974 "name": "BaseBdev1", 00:14:07.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.974 "is_configured": false, 00:14:07.974 "data_offset": 0, 00:14:07.974 "data_size": 0 00:14:07.974 }, 00:14:07.974 { 00:14:07.974 "name": "BaseBdev2", 00:14:07.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.974 "is_configured": false, 00:14:07.974 "data_offset": 0, 00:14:07.974 "data_size": 0 00:14:07.974 }, 00:14:07.974 { 00:14:07.974 "name": "BaseBdev3", 00:14:07.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.974 "is_configured": false, 00:14:07.974 "data_offset": 0, 00:14:07.974 "data_size": 0 00:14:07.974 }, 00:14:07.974 { 00:14:07.974 "name": "BaseBdev4", 00:14:07.974 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:07.974 "is_configured": false, 00:14:07.974 "data_offset": 0, 00:14:07.974 "data_size": 0 00:14:07.974 } 00:14:07.974 ] 00:14:07.974 }' 00:14:07.974 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:07.974 21:13:19 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:08.237 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:08.495 [2024-07-14 21:13:19.911130] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:08.495 [2024-07-14 21:13:19.911161] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa72ed434500 name Existed_Raid, state configuring 00:14:08.495 21:13:19 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:08.754 [2024-07-14 21:13:20.183156] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:08.754 [2024-07-14 21:13:20.183233] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:08.754 [2024-07-14 21:13:20.183238] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:08.754 [2024-07-14 21:13:20.183257] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:08.754 [2024-07-14 21:13:20.183260] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:08.754 [2024-07-14 21:13:20.183267] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:08.754 [2024-07-14 21:13:20.183270] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 
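What this part of the trace demonstrates is that a raid bdev may be registered before any of its members exist: Existed_Raid stays in "configuring" with zero discovered base bdevs, and each member is claimed the moment a bdev with a matching name shows up. A sketch of that pattern, built from the RPCs recorded around this point:

    # Incremental configuration (bdev_raid.sh@256-259): create the array
    # first, then supply its members one at a time.
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $rpc bdev_raid_create -z 64 -r concat \
        -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
    # state == "configuring", num_base_bdevs_discovered == 0
    $rpc bdev_malloc_create 32 512 -b BaseBdev1   # BaseBdev1 is claimed on arrival
    # discovered rises to 1; the array only comes online once all four members
    # are present, as in the read/write tests above (discovered == 4, online).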
00:14:08.754 [2024-07-14 21:13:20.183276] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:08.754 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:09.012 [2024-07-14 21:13:20.396322] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:09.012 BaseBdev1 00:14:09.012 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:09.012 21:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:09.012 21:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:09.012 21:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:09.012 21:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:09.012 21:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:09.012 21:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:09.271 21:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:09.529 [ 00:14:09.529 { 00:14:09.529 "name": "BaseBdev1", 00:14:09.529 "aliases": [ 00:14:09.529 "e588e3a3-4225-11ef-aa83-81fbc7dfef58" 00:14:09.529 ], 00:14:09.529 "product_name": "Malloc disk", 00:14:09.529 "block_size": 512, 00:14:09.529 "num_blocks": 65536, 00:14:09.529 "uuid": "e588e3a3-4225-11ef-aa83-81fbc7dfef58", 00:14:09.529 "assigned_rate_limits": { 00:14:09.529 "rw_ios_per_sec": 0, 00:14:09.529 "rw_mbytes_per_sec": 0, 00:14:09.529 "r_mbytes_per_sec": 0, 00:14:09.529 "w_mbytes_per_sec": 0 00:14:09.529 }, 00:14:09.529 "claimed": true, 00:14:09.529 "claim_type": "exclusive_write", 00:14:09.529 "zoned": false, 00:14:09.529 "supported_io_types": { 00:14:09.529 "read": true, 00:14:09.529 "write": true, 00:14:09.529 "unmap": true, 00:14:09.529 "flush": true, 00:14:09.529 "reset": true, 00:14:09.530 "nvme_admin": false, 00:14:09.530 "nvme_io": false, 00:14:09.530 "nvme_io_md": false, 00:14:09.530 "write_zeroes": true, 00:14:09.530 "zcopy": true, 00:14:09.530 "get_zone_info": false, 00:14:09.530 "zone_management": false, 00:14:09.530 "zone_append": false, 00:14:09.530 "compare": false, 00:14:09.530 "compare_and_write": false, 00:14:09.530 "abort": true, 00:14:09.530 "seek_hole": false, 00:14:09.530 "seek_data": false, 00:14:09.530 "copy": true, 00:14:09.530 "nvme_iov_md": false 00:14:09.530 }, 00:14:09.530 "memory_domains": [ 00:14:09.530 { 00:14:09.530 "dma_device_id": "system", 00:14:09.530 "dma_device_type": 1 00:14:09.530 }, 00:14:09.530 { 00:14:09.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:09.530 "dma_device_type": 2 00:14:09.530 } 00:14:09.530 ], 00:14:09.530 "driver_specific": {} 00:14:09.530 } 00:14:09.530 ] 00:14:09.530 21:13:20 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:09.530 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:09.530 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local 
raid_bdev_name=Existed_Raid 00:14:09.530 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:09.530 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:09.530 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:09.530 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:09.530 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:09.530 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:09.530 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:09.530 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:09.530 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:09.530 21:13:20 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:09.790 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:09.790 "name": "Existed_Raid", 00:14:09.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.790 "strip_size_kb": 64, 00:14:09.790 "state": "configuring", 00:14:09.790 "raid_level": "concat", 00:14:09.790 "superblock": false, 00:14:09.790 "num_base_bdevs": 4, 00:14:09.790 "num_base_bdevs_discovered": 1, 00:14:09.790 "num_base_bdevs_operational": 4, 00:14:09.790 "base_bdevs_list": [ 00:14:09.790 { 00:14:09.790 "name": "BaseBdev1", 00:14:09.790 "uuid": "e588e3a3-4225-11ef-aa83-81fbc7dfef58", 00:14:09.790 "is_configured": true, 00:14:09.790 "data_offset": 0, 00:14:09.790 "data_size": 65536 00:14:09.790 }, 00:14:09.790 { 00:14:09.790 "name": "BaseBdev2", 00:14:09.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.790 "is_configured": false, 00:14:09.790 "data_offset": 0, 00:14:09.790 "data_size": 0 00:14:09.790 }, 00:14:09.790 { 00:14:09.790 "name": "BaseBdev3", 00:14:09.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.790 "is_configured": false, 00:14:09.790 "data_offset": 0, 00:14:09.790 "data_size": 0 00:14:09.790 }, 00:14:09.790 { 00:14:09.790 "name": "BaseBdev4", 00:14:09.790 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:09.790 "is_configured": false, 00:14:09.790 "data_offset": 0, 00:14:09.790 "data_size": 0 00:14:09.790 } 00:14:09.790 ] 00:14:09.790 }' 00:14:09.790 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:09.790 21:13:21 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:10.046 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:10.303 [2024-07-14 21:13:21.703124] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:10.303 [2024-07-14 21:13:21.703173] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa72ed434500 name Existed_Raid, state configuring 00:14:10.303 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 
BaseBdev4' -n Existed_Raid 00:14:10.561 [2024-07-14 21:13:21.971158] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:10.561 [2024-07-14 21:13:21.972163] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:10.561 [2024-07-14 21:13:21.972230] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:10.561 [2024-07-14 21:13:21.972235] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:10.561 [2024-07-14 21:13:21.972254] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:10.561 [2024-07-14 21:13:21.972257] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:10.561 [2024-07-14 21:13:21.972264] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:10.561 21:13:21 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:10.819 21:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:10.819 "name": "Existed_Raid", 00:14:10.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.819 "strip_size_kb": 64, 00:14:10.819 "state": "configuring", 00:14:10.819 "raid_level": "concat", 00:14:10.819 "superblock": false, 00:14:10.819 "num_base_bdevs": 4, 00:14:10.819 "num_base_bdevs_discovered": 1, 00:14:10.819 "num_base_bdevs_operational": 4, 00:14:10.819 "base_bdevs_list": [ 00:14:10.819 { 00:14:10.819 "name": "BaseBdev1", 00:14:10.819 "uuid": "e588e3a3-4225-11ef-aa83-81fbc7dfef58", 00:14:10.819 "is_configured": true, 00:14:10.819 "data_offset": 0, 00:14:10.819 "data_size": 65536 00:14:10.819 }, 00:14:10.819 { 00:14:10.819 "name": "BaseBdev2", 00:14:10.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.819 "is_configured": false, 00:14:10.819 "data_offset": 0, 00:14:10.819 
"data_size": 0 00:14:10.819 }, 00:14:10.819 { 00:14:10.819 "name": "BaseBdev3", 00:14:10.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.819 "is_configured": false, 00:14:10.819 "data_offset": 0, 00:14:10.819 "data_size": 0 00:14:10.819 }, 00:14:10.819 { 00:14:10.819 "name": "BaseBdev4", 00:14:10.819 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:10.819 "is_configured": false, 00:14:10.819 "data_offset": 0, 00:14:10.819 "data_size": 0 00:14:10.819 } 00:14:10.819 ] 00:14:10.819 }' 00:14:10.819 21:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:10.819 21:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:11.076 21:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:11.333 [2024-07-14 21:13:22.715324] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:11.333 BaseBdev2 00:14:11.333 21:13:22 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:11.333 21:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:11.333 21:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:11.333 21:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:11.333 21:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:11.333 21:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:11.334 21:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:11.592 21:13:22 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:11.850 [ 00:14:11.850 { 00:14:11.850 "name": "BaseBdev2", 00:14:11.850 "aliases": [ 00:14:11.850 "e6eae4dc-4225-11ef-aa83-81fbc7dfef58" 00:14:11.850 ], 00:14:11.850 "product_name": "Malloc disk", 00:14:11.850 "block_size": 512, 00:14:11.850 "num_blocks": 65536, 00:14:11.850 "uuid": "e6eae4dc-4225-11ef-aa83-81fbc7dfef58", 00:14:11.850 "assigned_rate_limits": { 00:14:11.850 "rw_ios_per_sec": 0, 00:14:11.850 "rw_mbytes_per_sec": 0, 00:14:11.850 "r_mbytes_per_sec": 0, 00:14:11.850 "w_mbytes_per_sec": 0 00:14:11.850 }, 00:14:11.850 "claimed": true, 00:14:11.850 "claim_type": "exclusive_write", 00:14:11.850 "zoned": false, 00:14:11.850 "supported_io_types": { 00:14:11.850 "read": true, 00:14:11.850 "write": true, 00:14:11.850 "unmap": true, 00:14:11.850 "flush": true, 00:14:11.850 "reset": true, 00:14:11.850 "nvme_admin": false, 00:14:11.850 "nvme_io": false, 00:14:11.850 "nvme_io_md": false, 00:14:11.850 "write_zeroes": true, 00:14:11.850 "zcopy": true, 00:14:11.850 "get_zone_info": false, 00:14:11.850 "zone_management": false, 00:14:11.850 "zone_append": false, 00:14:11.850 "compare": false, 00:14:11.850 "compare_and_write": false, 00:14:11.850 "abort": true, 00:14:11.850 "seek_hole": false, 00:14:11.850 "seek_data": false, 00:14:11.850 "copy": true, 00:14:11.850 "nvme_iov_md": false 00:14:11.850 }, 00:14:11.850 "memory_domains": [ 00:14:11.850 { 00:14:11.850 "dma_device_id": "system", 00:14:11.850 "dma_device_type": 
1 00:14:11.851 }, 00:14:11.851 { 00:14:11.851 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:11.851 "dma_device_type": 2 00:14:11.851 } 00:14:11.851 ], 00:14:11.851 "driver_specific": {} 00:14:11.851 } 00:14:11.851 ] 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:11.851 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:12.110 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:12.110 "name": "Existed_Raid", 00:14:12.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.110 "strip_size_kb": 64, 00:14:12.110 "state": "configuring", 00:14:12.110 "raid_level": "concat", 00:14:12.110 "superblock": false, 00:14:12.110 "num_base_bdevs": 4, 00:14:12.110 "num_base_bdevs_discovered": 2, 00:14:12.110 "num_base_bdevs_operational": 4, 00:14:12.110 "base_bdevs_list": [ 00:14:12.110 { 00:14:12.110 "name": "BaseBdev1", 00:14:12.110 "uuid": "e588e3a3-4225-11ef-aa83-81fbc7dfef58", 00:14:12.110 "is_configured": true, 00:14:12.110 "data_offset": 0, 00:14:12.110 "data_size": 65536 00:14:12.110 }, 00:14:12.110 { 00:14:12.110 "name": "BaseBdev2", 00:14:12.110 "uuid": "e6eae4dc-4225-11ef-aa83-81fbc7dfef58", 00:14:12.110 "is_configured": true, 00:14:12.110 "data_offset": 0, 00:14:12.110 "data_size": 65536 00:14:12.110 }, 00:14:12.110 { 00:14:12.110 "name": "BaseBdev3", 00:14:12.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.110 "is_configured": false, 00:14:12.110 "data_offset": 0, 00:14:12.110 "data_size": 0 00:14:12.110 }, 00:14:12.110 { 00:14:12.110 "name": "BaseBdev4", 00:14:12.110 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:12.110 "is_configured": false, 00:14:12.110 "data_offset": 0, 00:14:12.110 "data_size": 0 00:14:12.110 } 00:14:12.110 ] 00:14:12.110 }' 00:14:12.110 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:12.110 
21:13:23 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:12.369 21:13:23 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:12.627 [2024-07-14 21:13:24.035354] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:12.627 BaseBdev3 00:14:12.627 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:12.627 21:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:12.627 21:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:12.627 21:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:12.628 21:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:12.628 21:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:12.628 21:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:12.886 21:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:13.145 [ 00:14:13.145 { 00:14:13.145 "name": "BaseBdev3", 00:14:13.145 "aliases": [ 00:14:13.145 "e7b451b8-4225-11ef-aa83-81fbc7dfef58" 00:14:13.145 ], 00:14:13.145 "product_name": "Malloc disk", 00:14:13.145 "block_size": 512, 00:14:13.145 "num_blocks": 65536, 00:14:13.145 "uuid": "e7b451b8-4225-11ef-aa83-81fbc7dfef58", 00:14:13.145 "assigned_rate_limits": { 00:14:13.145 "rw_ios_per_sec": 0, 00:14:13.145 "rw_mbytes_per_sec": 0, 00:14:13.145 "r_mbytes_per_sec": 0, 00:14:13.145 "w_mbytes_per_sec": 0 00:14:13.145 }, 00:14:13.145 "claimed": true, 00:14:13.145 "claim_type": "exclusive_write", 00:14:13.145 "zoned": false, 00:14:13.145 "supported_io_types": { 00:14:13.145 "read": true, 00:14:13.145 "write": true, 00:14:13.145 "unmap": true, 00:14:13.145 "flush": true, 00:14:13.145 "reset": true, 00:14:13.145 "nvme_admin": false, 00:14:13.145 "nvme_io": false, 00:14:13.145 "nvme_io_md": false, 00:14:13.145 "write_zeroes": true, 00:14:13.145 "zcopy": true, 00:14:13.145 "get_zone_info": false, 00:14:13.145 "zone_management": false, 00:14:13.145 "zone_append": false, 00:14:13.145 "compare": false, 00:14:13.145 "compare_and_write": false, 00:14:13.145 "abort": true, 00:14:13.145 "seek_hole": false, 00:14:13.145 "seek_data": false, 00:14:13.145 "copy": true, 00:14:13.145 "nvme_iov_md": false 00:14:13.145 }, 00:14:13.145 "memory_domains": [ 00:14:13.145 { 00:14:13.145 "dma_device_id": "system", 00:14:13.145 "dma_device_type": 1 00:14:13.145 }, 00:14:13.145 { 00:14:13.145 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:13.145 "dma_device_type": 2 00:14:13.145 } 00:14:13.145 ], 00:14:13.145 "driver_specific": {} 00:14:13.145 } 00:14:13.145 ] 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # 
verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:13.145 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:13.404 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:13.404 "name": "Existed_Raid", 00:14:13.404 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.404 "strip_size_kb": 64, 00:14:13.404 "state": "configuring", 00:14:13.404 "raid_level": "concat", 00:14:13.405 "superblock": false, 00:14:13.405 "num_base_bdevs": 4, 00:14:13.405 "num_base_bdevs_discovered": 3, 00:14:13.405 "num_base_bdevs_operational": 4, 00:14:13.405 "base_bdevs_list": [ 00:14:13.405 { 00:14:13.405 "name": "BaseBdev1", 00:14:13.405 "uuid": "e588e3a3-4225-11ef-aa83-81fbc7dfef58", 00:14:13.405 "is_configured": true, 00:14:13.405 "data_offset": 0, 00:14:13.405 "data_size": 65536 00:14:13.405 }, 00:14:13.405 { 00:14:13.405 "name": "BaseBdev2", 00:14:13.405 "uuid": "e6eae4dc-4225-11ef-aa83-81fbc7dfef58", 00:14:13.405 "is_configured": true, 00:14:13.405 "data_offset": 0, 00:14:13.405 "data_size": 65536 00:14:13.405 }, 00:14:13.405 { 00:14:13.405 "name": "BaseBdev3", 00:14:13.405 "uuid": "e7b451b8-4225-11ef-aa83-81fbc7dfef58", 00:14:13.405 "is_configured": true, 00:14:13.405 "data_offset": 0, 00:14:13.405 "data_size": 65536 00:14:13.405 }, 00:14:13.405 { 00:14:13.405 "name": "BaseBdev4", 00:14:13.405 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:13.405 "is_configured": false, 00:14:13.405 "data_offset": 0, 00:14:13.405 "data_size": 0 00:14:13.405 } 00:14:13.405 ] 00:14:13.405 }' 00:14:13.405 21:13:24 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:13.405 21:13:24 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:13.664 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:13.923 [2024-07-14 21:13:25.367357] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:13.923 [2024-07-14 21:13:25.367392] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa72ed434a00 00:14:13.923 [2024-07-14 21:13:25.367396] 
bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:13.923 [2024-07-14 21:13:25.367441] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa72ed497e20 00:14:13.923 [2024-07-14 21:13:25.367554] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa72ed434a00 00:14:13.923 [2024-07-14 21:13:25.367560] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0xa72ed434a00 00:14:13.923 [2024-07-14 21:13:25.367594] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:13.923 BaseBdev4 00:14:13.923 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:13.923 21:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:13.923 21:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:13.923 21:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:13.923 21:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:13.923 21:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:13.923 21:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:14.182 21:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:14.442 [ 00:14:14.442 { 00:14:14.442 "name": "BaseBdev4", 00:14:14.442 "aliases": [ 00:14:14.442 "e87f9144-4225-11ef-aa83-81fbc7dfef58" 00:14:14.442 ], 00:14:14.442 "product_name": "Malloc disk", 00:14:14.442 "block_size": 512, 00:14:14.442 "num_blocks": 65536, 00:14:14.442 "uuid": "e87f9144-4225-11ef-aa83-81fbc7dfef58", 00:14:14.442 "assigned_rate_limits": { 00:14:14.442 "rw_ios_per_sec": 0, 00:14:14.442 "rw_mbytes_per_sec": 0, 00:14:14.442 "r_mbytes_per_sec": 0, 00:14:14.442 "w_mbytes_per_sec": 0 00:14:14.442 }, 00:14:14.442 "claimed": true, 00:14:14.442 "claim_type": "exclusive_write", 00:14:14.442 "zoned": false, 00:14:14.442 "supported_io_types": { 00:14:14.442 "read": true, 00:14:14.442 "write": true, 00:14:14.442 "unmap": true, 00:14:14.442 "flush": true, 00:14:14.442 "reset": true, 00:14:14.442 "nvme_admin": false, 00:14:14.442 "nvme_io": false, 00:14:14.442 "nvme_io_md": false, 00:14:14.442 "write_zeroes": true, 00:14:14.442 "zcopy": true, 00:14:14.442 "get_zone_info": false, 00:14:14.442 "zone_management": false, 00:14:14.442 "zone_append": false, 00:14:14.442 "compare": false, 00:14:14.442 "compare_and_write": false, 00:14:14.442 "abort": true, 00:14:14.442 "seek_hole": false, 00:14:14.442 "seek_data": false, 00:14:14.442 "copy": true, 00:14:14.442 "nvme_iov_md": false 00:14:14.442 }, 00:14:14.442 "memory_domains": [ 00:14:14.442 { 00:14:14.442 "dma_device_id": "system", 00:14:14.442 "dma_device_type": 1 00:14:14.442 }, 00:14:14.442 { 00:14:14.442 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:14.442 "dma_device_type": 2 00:14:14.442 } 00:14:14.442 ], 00:14:14.442 "driver_specific": {} 00:14:14.442 } 00:14:14.442 ] 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 
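Up to this point the trace repeats one pattern per base device: bdev_malloc_create 32 512 -b BaseBdevN allocates a 32 MB malloc bdev with 512-byte blocks (hence num_blocks 65536 in each dump), bdev_wait_for_examine lets the raid module claim it, and bdev_raid_get_bdevs all is re-read to confirm num_base_bdevs_discovered grew by one; when the fourth member is claimed the array leaves "configuring" and comes online, as the next dump shows. A condensed sketch of that loop, reusing the RPCs and socket from the trace (the jq state probe is illustrative, not a line from the test script):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  # Assumes a bdev_raid_create -z 64 -r concat ... -n Existed_Raid was already
  # issued while none of the four members existed, as at @256 above.
  for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      $rpc bdev_malloc_create 32 512 -b "$bdev"   # 32 MB / 512 B blocks -> 65536 blocks
      $rpc bdev_wait_for_examine                  # let the raid module claim the new bdev
      $rpc bdev_get_bdevs -b "$bdev" -t 2000      # same wait the test's waitforbdev performs
      $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid").state'
  done
  # prints "configuring" while members are still missing, then "online" after the last claim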
00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:14.442 21:13:25 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:14.701 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:14.701 "name": "Existed_Raid", 00:14:14.701 "uuid": "e87f9903-4225-11ef-aa83-81fbc7dfef58", 00:14:14.701 "strip_size_kb": 64, 00:14:14.701 "state": "online", 00:14:14.701 "raid_level": "concat", 00:14:14.701 "superblock": false, 00:14:14.701 "num_base_bdevs": 4, 00:14:14.701 "num_base_bdevs_discovered": 4, 00:14:14.701 "num_base_bdevs_operational": 4, 00:14:14.701 "base_bdevs_list": [ 00:14:14.701 { 00:14:14.701 "name": "BaseBdev1", 00:14:14.701 "uuid": "e588e3a3-4225-11ef-aa83-81fbc7dfef58", 00:14:14.701 "is_configured": true, 00:14:14.701 "data_offset": 0, 00:14:14.701 "data_size": 65536 00:14:14.701 }, 00:14:14.701 { 00:14:14.701 "name": "BaseBdev2", 00:14:14.701 "uuid": "e6eae4dc-4225-11ef-aa83-81fbc7dfef58", 00:14:14.701 "is_configured": true, 00:14:14.701 "data_offset": 0, 00:14:14.701 "data_size": 65536 00:14:14.701 }, 00:14:14.701 { 00:14:14.701 "name": "BaseBdev3", 00:14:14.701 "uuid": "e7b451b8-4225-11ef-aa83-81fbc7dfef58", 00:14:14.701 "is_configured": true, 00:14:14.701 "data_offset": 0, 00:14:14.701 "data_size": 65536 00:14:14.701 }, 00:14:14.701 { 00:14:14.701 "name": "BaseBdev4", 00:14:14.701 "uuid": "e87f9144-4225-11ef-aa83-81fbc7dfef58", 00:14:14.701 "is_configured": true, 00:14:14.701 "data_offset": 0, 00:14:14.701 "data_size": 65536 00:14:14.701 } 00:14:14.701 ] 00:14:14.701 }' 00:14:14.701 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:14.701 21:13:26 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:14.960 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:14.960 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:14.960 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local 
raid_bdev_info 00:14:14.960 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:14.960 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:14.960 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:14.960 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:14.960 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:15.219 [2024-07-14 21:13:26.711308] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:15.219 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:15.219 "name": "Existed_Raid", 00:14:15.219 "aliases": [ 00:14:15.219 "e87f9903-4225-11ef-aa83-81fbc7dfef58" 00:14:15.219 ], 00:14:15.219 "product_name": "Raid Volume", 00:14:15.219 "block_size": 512, 00:14:15.219 "num_blocks": 262144, 00:14:15.219 "uuid": "e87f9903-4225-11ef-aa83-81fbc7dfef58", 00:14:15.219 "assigned_rate_limits": { 00:14:15.219 "rw_ios_per_sec": 0, 00:14:15.219 "rw_mbytes_per_sec": 0, 00:14:15.219 "r_mbytes_per_sec": 0, 00:14:15.219 "w_mbytes_per_sec": 0 00:14:15.219 }, 00:14:15.219 "claimed": false, 00:14:15.219 "zoned": false, 00:14:15.219 "supported_io_types": { 00:14:15.219 "read": true, 00:14:15.219 "write": true, 00:14:15.219 "unmap": true, 00:14:15.219 "flush": true, 00:14:15.219 "reset": true, 00:14:15.219 "nvme_admin": false, 00:14:15.219 "nvme_io": false, 00:14:15.219 "nvme_io_md": false, 00:14:15.219 "write_zeroes": true, 00:14:15.219 "zcopy": false, 00:14:15.219 "get_zone_info": false, 00:14:15.219 "zone_management": false, 00:14:15.219 "zone_append": false, 00:14:15.219 "compare": false, 00:14:15.219 "compare_and_write": false, 00:14:15.219 "abort": false, 00:14:15.219 "seek_hole": false, 00:14:15.219 "seek_data": false, 00:14:15.219 "copy": false, 00:14:15.219 "nvme_iov_md": false 00:14:15.219 }, 00:14:15.219 "memory_domains": [ 00:14:15.219 { 00:14:15.219 "dma_device_id": "system", 00:14:15.219 "dma_device_type": 1 00:14:15.219 }, 00:14:15.219 { 00:14:15.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.219 "dma_device_type": 2 00:14:15.219 }, 00:14:15.219 { 00:14:15.219 "dma_device_id": "system", 00:14:15.219 "dma_device_type": 1 00:14:15.219 }, 00:14:15.219 { 00:14:15.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.219 "dma_device_type": 2 00:14:15.219 }, 00:14:15.219 { 00:14:15.219 "dma_device_id": "system", 00:14:15.219 "dma_device_type": 1 00:14:15.219 }, 00:14:15.219 { 00:14:15.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.219 "dma_device_type": 2 00:14:15.219 }, 00:14:15.219 { 00:14:15.219 "dma_device_id": "system", 00:14:15.219 "dma_device_type": 1 00:14:15.219 }, 00:14:15.219 { 00:14:15.219 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.219 "dma_device_type": 2 00:14:15.219 } 00:14:15.219 ], 00:14:15.219 "driver_specific": { 00:14:15.219 "raid": { 00:14:15.219 "uuid": "e87f9903-4225-11ef-aa83-81fbc7dfef58", 00:14:15.219 "strip_size_kb": 64, 00:14:15.219 "state": "online", 00:14:15.219 "raid_level": "concat", 00:14:15.219 "superblock": false, 00:14:15.219 "num_base_bdevs": 4, 00:14:15.220 "num_base_bdevs_discovered": 4, 00:14:15.220 "num_base_bdevs_operational": 4, 00:14:15.220 "base_bdevs_list": [ 00:14:15.220 { 00:14:15.220 "name": "BaseBdev1", 00:14:15.220 "uuid": 
"e588e3a3-4225-11ef-aa83-81fbc7dfef58", 00:14:15.220 "is_configured": true, 00:14:15.220 "data_offset": 0, 00:14:15.220 "data_size": 65536 00:14:15.220 }, 00:14:15.220 { 00:14:15.220 "name": "BaseBdev2", 00:14:15.220 "uuid": "e6eae4dc-4225-11ef-aa83-81fbc7dfef58", 00:14:15.220 "is_configured": true, 00:14:15.220 "data_offset": 0, 00:14:15.220 "data_size": 65536 00:14:15.220 }, 00:14:15.220 { 00:14:15.220 "name": "BaseBdev3", 00:14:15.220 "uuid": "e7b451b8-4225-11ef-aa83-81fbc7dfef58", 00:14:15.220 "is_configured": true, 00:14:15.220 "data_offset": 0, 00:14:15.220 "data_size": 65536 00:14:15.220 }, 00:14:15.220 { 00:14:15.220 "name": "BaseBdev4", 00:14:15.220 "uuid": "e87f9144-4225-11ef-aa83-81fbc7dfef58", 00:14:15.220 "is_configured": true, 00:14:15.220 "data_offset": 0, 00:14:15.220 "data_size": 65536 00:14:15.220 } 00:14:15.220 ] 00:14:15.220 } 00:14:15.220 } 00:14:15.220 }' 00:14:15.220 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:15.220 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:15.220 BaseBdev2 00:14:15.220 BaseBdev3 00:14:15.220 BaseBdev4' 00:14:15.220 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:15.220 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:15.220 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:15.477 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:15.477 "name": "BaseBdev1", 00:14:15.477 "aliases": [ 00:14:15.477 "e588e3a3-4225-11ef-aa83-81fbc7dfef58" 00:14:15.477 ], 00:14:15.477 "product_name": "Malloc disk", 00:14:15.477 "block_size": 512, 00:14:15.477 "num_blocks": 65536, 00:14:15.477 "uuid": "e588e3a3-4225-11ef-aa83-81fbc7dfef58", 00:14:15.477 "assigned_rate_limits": { 00:14:15.477 "rw_ios_per_sec": 0, 00:14:15.477 "rw_mbytes_per_sec": 0, 00:14:15.477 "r_mbytes_per_sec": 0, 00:14:15.477 "w_mbytes_per_sec": 0 00:14:15.477 }, 00:14:15.477 "claimed": true, 00:14:15.477 "claim_type": "exclusive_write", 00:14:15.477 "zoned": false, 00:14:15.477 "supported_io_types": { 00:14:15.477 "read": true, 00:14:15.477 "write": true, 00:14:15.477 "unmap": true, 00:14:15.477 "flush": true, 00:14:15.477 "reset": true, 00:14:15.477 "nvme_admin": false, 00:14:15.477 "nvme_io": false, 00:14:15.477 "nvme_io_md": false, 00:14:15.477 "write_zeroes": true, 00:14:15.478 "zcopy": true, 00:14:15.478 "get_zone_info": false, 00:14:15.478 "zone_management": false, 00:14:15.478 "zone_append": false, 00:14:15.478 "compare": false, 00:14:15.478 "compare_and_write": false, 00:14:15.478 "abort": true, 00:14:15.478 "seek_hole": false, 00:14:15.478 "seek_data": false, 00:14:15.478 "copy": true, 00:14:15.478 "nvme_iov_md": false 00:14:15.478 }, 00:14:15.478 "memory_domains": [ 00:14:15.478 { 00:14:15.478 "dma_device_id": "system", 00:14:15.478 "dma_device_type": 1 00:14:15.478 }, 00:14:15.478 { 00:14:15.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.478 "dma_device_type": 2 00:14:15.478 } 00:14:15.478 ], 00:14:15.478 "driver_specific": {} 00:14:15.478 }' 00:14:15.478 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.478 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- 
# jq .block_size 00:14:15.478 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:15.478 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.478 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.478 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:15.478 21:13:26 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.478 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.478 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:15.478 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.478 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.478 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:15.478 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:15.735 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:15.735 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:15.993 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:15.993 "name": "BaseBdev2", 00:14:15.993 "aliases": [ 00:14:15.993 "e6eae4dc-4225-11ef-aa83-81fbc7dfef58" 00:14:15.993 ], 00:14:15.993 "product_name": "Malloc disk", 00:14:15.993 "block_size": 512, 00:14:15.993 "num_blocks": 65536, 00:14:15.993 "uuid": "e6eae4dc-4225-11ef-aa83-81fbc7dfef58", 00:14:15.993 "assigned_rate_limits": { 00:14:15.993 "rw_ios_per_sec": 0, 00:14:15.993 "rw_mbytes_per_sec": 0, 00:14:15.994 "r_mbytes_per_sec": 0, 00:14:15.994 "w_mbytes_per_sec": 0 00:14:15.994 }, 00:14:15.994 "claimed": true, 00:14:15.994 "claim_type": "exclusive_write", 00:14:15.994 "zoned": false, 00:14:15.994 "supported_io_types": { 00:14:15.994 "read": true, 00:14:15.994 "write": true, 00:14:15.994 "unmap": true, 00:14:15.994 "flush": true, 00:14:15.994 "reset": true, 00:14:15.994 "nvme_admin": false, 00:14:15.994 "nvme_io": false, 00:14:15.994 "nvme_io_md": false, 00:14:15.994 "write_zeroes": true, 00:14:15.994 "zcopy": true, 00:14:15.994 "get_zone_info": false, 00:14:15.994 "zone_management": false, 00:14:15.994 "zone_append": false, 00:14:15.994 "compare": false, 00:14:15.994 "compare_and_write": false, 00:14:15.994 "abort": true, 00:14:15.994 "seek_hole": false, 00:14:15.994 "seek_data": false, 00:14:15.994 "copy": true, 00:14:15.994 "nvme_iov_md": false 00:14:15.994 }, 00:14:15.994 "memory_domains": [ 00:14:15.994 { 00:14:15.994 "dma_device_id": "system", 00:14:15.994 "dma_device_type": 1 00:14:15.994 }, 00:14:15.994 { 00:14:15.994 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:15.994 "dma_device_type": 2 00:14:15.994 } 00:14:15.994 ], 00:14:15.994 "driver_specific": {} 00:14:15.994 }' 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:15.994 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:16.252 "name": "BaseBdev3", 00:14:16.252 "aliases": [ 00:14:16.252 "e7b451b8-4225-11ef-aa83-81fbc7dfef58" 00:14:16.252 ], 00:14:16.252 "product_name": "Malloc disk", 00:14:16.252 "block_size": 512, 00:14:16.252 "num_blocks": 65536, 00:14:16.252 "uuid": "e7b451b8-4225-11ef-aa83-81fbc7dfef58", 00:14:16.252 "assigned_rate_limits": { 00:14:16.252 "rw_ios_per_sec": 0, 00:14:16.252 "rw_mbytes_per_sec": 0, 00:14:16.252 "r_mbytes_per_sec": 0, 00:14:16.252 "w_mbytes_per_sec": 0 00:14:16.252 }, 00:14:16.252 "claimed": true, 00:14:16.252 "claim_type": "exclusive_write", 00:14:16.252 "zoned": false, 00:14:16.252 "supported_io_types": { 00:14:16.252 "read": true, 00:14:16.252 "write": true, 00:14:16.252 "unmap": true, 00:14:16.252 "flush": true, 00:14:16.252 "reset": true, 00:14:16.252 "nvme_admin": false, 00:14:16.252 "nvme_io": false, 00:14:16.252 "nvme_io_md": false, 00:14:16.252 "write_zeroes": true, 00:14:16.252 "zcopy": true, 00:14:16.252 "get_zone_info": false, 00:14:16.252 "zone_management": false, 00:14:16.252 "zone_append": false, 00:14:16.252 "compare": false, 00:14:16.252 "compare_and_write": false, 00:14:16.252 "abort": true, 00:14:16.252 "seek_hole": false, 00:14:16.252 "seek_data": false, 00:14:16.252 "copy": true, 00:14:16.252 "nvme_iov_md": false 00:14:16.252 }, 00:14:16.252 "memory_domains": [ 00:14:16.252 { 00:14:16.252 "dma_device_id": "system", 00:14:16.252 "dma_device_type": 1 00:14:16.252 }, 00:14:16.252 { 00:14:16.252 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.252 "dma_device_type": 2 00:14:16.252 } 00:14:16.252 ], 00:14:16.252 "driver_specific": {} 00:14:16.252 }' 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:16.252 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:16.511 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:16.511 "name": "BaseBdev4", 00:14:16.511 "aliases": [ 00:14:16.511 "e87f9144-4225-11ef-aa83-81fbc7dfef58" 00:14:16.511 ], 00:14:16.511 "product_name": "Malloc disk", 00:14:16.511 "block_size": 512, 00:14:16.511 "num_blocks": 65536, 00:14:16.511 "uuid": "e87f9144-4225-11ef-aa83-81fbc7dfef58", 00:14:16.511 "assigned_rate_limits": { 00:14:16.511 "rw_ios_per_sec": 0, 00:14:16.511 "rw_mbytes_per_sec": 0, 00:14:16.511 "r_mbytes_per_sec": 0, 00:14:16.511 "w_mbytes_per_sec": 0 00:14:16.511 }, 00:14:16.511 "claimed": true, 00:14:16.511 "claim_type": "exclusive_write", 00:14:16.511 "zoned": false, 00:14:16.511 "supported_io_types": { 00:14:16.511 "read": true, 00:14:16.511 "write": true, 00:14:16.511 "unmap": true, 00:14:16.511 "flush": true, 00:14:16.511 "reset": true, 00:14:16.511 "nvme_admin": false, 00:14:16.511 "nvme_io": false, 00:14:16.511 "nvme_io_md": false, 00:14:16.511 "write_zeroes": true, 00:14:16.511 "zcopy": true, 00:14:16.511 "get_zone_info": false, 00:14:16.511 "zone_management": false, 00:14:16.511 "zone_append": false, 00:14:16.511 "compare": false, 00:14:16.511 "compare_and_write": false, 00:14:16.511 "abort": true, 00:14:16.511 "seek_hole": false, 00:14:16.511 "seek_data": false, 00:14:16.511 "copy": true, 00:14:16.511 "nvme_iov_md": false 00:14:16.511 }, 00:14:16.511 "memory_domains": [ 00:14:16.511 { 00:14:16.511 "dma_device_id": "system", 00:14:16.511 "dma_device_type": 1 00:14:16.511 }, 00:14:16.511 { 00:14:16.511 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:16.511 "dma_device_type": 2 00:14:16.511 } 00:14:16.511 ], 00:14:16.511 "driver_specific": {} 00:14:16.511 }' 00:14:16.511 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.511 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:16.511 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:16.511 21:13:27 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.511 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:16.511 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:16.511 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.511 21:13:28 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:16.511 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:16.511 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.511 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:16.511 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:16.511 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:16.769 [2024-07-14 21:13:28.307275] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:16.769 [2024-07-14 21:13:28.307306] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:16.769 [2024-07-14 21:13:28.307346] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@215 -- # return 1 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:17.027 "name": "Existed_Raid", 00:14:17.027 "uuid": "e87f9903-4225-11ef-aa83-81fbc7dfef58", 00:14:17.027 "strip_size_kb": 64, 00:14:17.027 "state": "offline", 00:14:17.027 "raid_level": "concat", 00:14:17.027 "superblock": false, 00:14:17.027 "num_base_bdevs": 4, 00:14:17.027 "num_base_bdevs_discovered": 3, 00:14:17.027 "num_base_bdevs_operational": 3, 00:14:17.027 "base_bdevs_list": [ 00:14:17.027 { 00:14:17.027 
"name": null, 00:14:17.027 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:17.027 "is_configured": false, 00:14:17.027 "data_offset": 0, 00:14:17.027 "data_size": 65536 00:14:17.027 }, 00:14:17.027 { 00:14:17.027 "name": "BaseBdev2", 00:14:17.027 "uuid": "e6eae4dc-4225-11ef-aa83-81fbc7dfef58", 00:14:17.027 "is_configured": true, 00:14:17.027 "data_offset": 0, 00:14:17.027 "data_size": 65536 00:14:17.027 }, 00:14:17.027 { 00:14:17.027 "name": "BaseBdev3", 00:14:17.027 "uuid": "e7b451b8-4225-11ef-aa83-81fbc7dfef58", 00:14:17.027 "is_configured": true, 00:14:17.027 "data_offset": 0, 00:14:17.027 "data_size": 65536 00:14:17.027 }, 00:14:17.027 { 00:14:17.027 "name": "BaseBdev4", 00:14:17.027 "uuid": "e87f9144-4225-11ef-aa83-81fbc7dfef58", 00:14:17.027 "is_configured": true, 00:14:17.027 "data_offset": 0, 00:14:17.027 "data_size": 65536 00:14:17.027 } 00:14:17.027 ] 00:14:17.027 }' 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:17.027 21:13:28 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:17.593 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:17.593 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:17.593 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.593 21:13:28 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:17.852 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:17.852 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:17.852 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:17.852 [2024-07-14 21:13:29.360203] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:17.852 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:17.852 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:17.852 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:17.852 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:18.418 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:18.418 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.418 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:18.418 [2024-07-14 21:13:29.916936] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:18.418 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:18.418 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:18.418 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 
00:14:18.418 21:13:29 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:18.677 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:18.677 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:18.677 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:18.935 [2024-07-14 21:13:30.421877] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:18.935 [2024-07-14 21:13:30.421916] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa72ed434a00 name Existed_Raid, state offline 00:14:18.935 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:18.935 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:18.935 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:18.935 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:19.194 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:19.194 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:19.194 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:19.194 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:19.194 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:19.194 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:19.451 BaseBdev2 00:14:19.452 21:13:30 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:19.452 21:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:19.452 21:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:19.452 21:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:19.452 21:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:19.452 21:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:19.452 21:13:30 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:19.709 21:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:19.967 [ 00:14:19.967 { 00:14:19.967 "name": "BaseBdev2", 00:14:19.967 "aliases": [ 00:14:19.967 "ebcb80fc-4225-11ef-aa83-81fbc7dfef58" 00:14:19.967 ], 00:14:19.967 "product_name": "Malloc disk", 00:14:19.967 "block_size": 512, 00:14:19.967 "num_blocks": 65536, 00:14:19.967 "uuid": "ebcb80fc-4225-11ef-aa83-81fbc7dfef58", 00:14:19.967 "assigned_rate_limits": { 00:14:19.967 "rw_ios_per_sec": 0, 00:14:19.967 "rw_mbytes_per_sec": 0, 00:14:19.967 
"r_mbytes_per_sec": 0, 00:14:19.967 "w_mbytes_per_sec": 0 00:14:19.967 }, 00:14:19.967 "claimed": false, 00:14:19.967 "zoned": false, 00:14:19.967 "supported_io_types": { 00:14:19.967 "read": true, 00:14:19.967 "write": true, 00:14:19.967 "unmap": true, 00:14:19.967 "flush": true, 00:14:19.967 "reset": true, 00:14:19.967 "nvme_admin": false, 00:14:19.967 "nvme_io": false, 00:14:19.967 "nvme_io_md": false, 00:14:19.967 "write_zeroes": true, 00:14:19.967 "zcopy": true, 00:14:19.967 "get_zone_info": false, 00:14:19.967 "zone_management": false, 00:14:19.967 "zone_append": false, 00:14:19.967 "compare": false, 00:14:19.967 "compare_and_write": false, 00:14:19.967 "abort": true, 00:14:19.967 "seek_hole": false, 00:14:19.967 "seek_data": false, 00:14:19.967 "copy": true, 00:14:19.967 "nvme_iov_md": false 00:14:19.967 }, 00:14:19.967 "memory_domains": [ 00:14:19.967 { 00:14:19.967 "dma_device_id": "system", 00:14:19.967 "dma_device_type": 1 00:14:19.967 }, 00:14:19.967 { 00:14:19.967 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:19.967 "dma_device_type": 2 00:14:19.967 } 00:14:19.967 ], 00:14:19.967 "driver_specific": {} 00:14:19.967 } 00:14:19.967 ] 00:14:19.967 21:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:19.967 21:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:19.967 21:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:19.967 21:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:20.225 BaseBdev3 00:14:20.225 21:13:31 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:20.225 21:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:20.225 21:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:20.225 21:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:20.225 21:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:20.225 21:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:20.225 21:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:20.499 21:13:31 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:20.774 [ 00:14:20.774 { 00:14:20.774 "name": "BaseBdev3", 00:14:20.774 "aliases": [ 00:14:20.774 "ec3bcf61-4225-11ef-aa83-81fbc7dfef58" 00:14:20.774 ], 00:14:20.774 "product_name": "Malloc disk", 00:14:20.774 "block_size": 512, 00:14:20.774 "num_blocks": 65536, 00:14:20.774 "uuid": "ec3bcf61-4225-11ef-aa83-81fbc7dfef58", 00:14:20.774 "assigned_rate_limits": { 00:14:20.774 "rw_ios_per_sec": 0, 00:14:20.774 "rw_mbytes_per_sec": 0, 00:14:20.774 "r_mbytes_per_sec": 0, 00:14:20.774 "w_mbytes_per_sec": 0 00:14:20.774 }, 00:14:20.774 "claimed": false, 00:14:20.774 "zoned": false, 00:14:20.774 "supported_io_types": { 00:14:20.774 "read": true, 00:14:20.774 "write": true, 00:14:20.774 "unmap": true, 00:14:20.774 "flush": true, 00:14:20.774 "reset": true, 00:14:20.774 "nvme_admin": false, 
00:14:20.774 "nvme_io": false, 00:14:20.774 "nvme_io_md": false, 00:14:20.774 "write_zeroes": true, 00:14:20.774 "zcopy": true, 00:14:20.774 "get_zone_info": false, 00:14:20.774 "zone_management": false, 00:14:20.774 "zone_append": false, 00:14:20.774 "compare": false, 00:14:20.774 "compare_and_write": false, 00:14:20.774 "abort": true, 00:14:20.774 "seek_hole": false, 00:14:20.774 "seek_data": false, 00:14:20.774 "copy": true, 00:14:20.774 "nvme_iov_md": false 00:14:20.774 }, 00:14:20.774 "memory_domains": [ 00:14:20.774 { 00:14:20.774 "dma_device_id": "system", 00:14:20.774 "dma_device_type": 1 00:14:20.774 }, 00:14:20.774 { 00:14:20.774 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:20.774 "dma_device_type": 2 00:14:20.774 } 00:14:20.774 ], 00:14:20.774 "driver_specific": {} 00:14:20.774 } 00:14:20.774 ] 00:14:20.774 21:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:20.774 21:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:20.774 21:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:20.774 21:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:21.032 BaseBdev4 00:14:21.032 21:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:21.032 21:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:21.032 21:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:21.032 21:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:21.032 21:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:21.032 21:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:21.032 21:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:21.291 21:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:21.549 [ 00:14:21.549 { 00:14:21.549 "name": "BaseBdev4", 00:14:21.549 "aliases": [ 00:14:21.549 "ecadf2b5-4225-11ef-aa83-81fbc7dfef58" 00:14:21.549 ], 00:14:21.549 "product_name": "Malloc disk", 00:14:21.549 "block_size": 512, 00:14:21.549 "num_blocks": 65536, 00:14:21.549 "uuid": "ecadf2b5-4225-11ef-aa83-81fbc7dfef58", 00:14:21.549 "assigned_rate_limits": { 00:14:21.549 "rw_ios_per_sec": 0, 00:14:21.549 "rw_mbytes_per_sec": 0, 00:14:21.549 "r_mbytes_per_sec": 0, 00:14:21.549 "w_mbytes_per_sec": 0 00:14:21.549 }, 00:14:21.549 "claimed": false, 00:14:21.549 "zoned": false, 00:14:21.549 "supported_io_types": { 00:14:21.549 "read": true, 00:14:21.549 "write": true, 00:14:21.549 "unmap": true, 00:14:21.549 "flush": true, 00:14:21.549 "reset": true, 00:14:21.549 "nvme_admin": false, 00:14:21.549 "nvme_io": false, 00:14:21.549 "nvme_io_md": false, 00:14:21.549 "write_zeroes": true, 00:14:21.549 "zcopy": true, 00:14:21.549 "get_zone_info": false, 00:14:21.549 "zone_management": false, 00:14:21.549 "zone_append": false, 00:14:21.549 "compare": false, 00:14:21.549 "compare_and_write": false, 00:14:21.549 "abort": true, 
00:14:21.549 "seek_hole": false, 00:14:21.549 "seek_data": false, 00:14:21.549 "copy": true, 00:14:21.549 "nvme_iov_md": false 00:14:21.549 }, 00:14:21.549 "memory_domains": [ 00:14:21.549 { 00:14:21.549 "dma_device_id": "system", 00:14:21.549 "dma_device_type": 1 00:14:21.549 }, 00:14:21.549 { 00:14:21.549 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:21.549 "dma_device_type": 2 00:14:21.549 } 00:14:21.549 ], 00:14:21.549 "driver_specific": {} 00:14:21.549 } 00:14:21.549 ] 00:14:21.549 21:13:32 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:21.549 21:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:21.549 21:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:21.549 21:13:32 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:21.549 [2024-07-14 21:13:33.046094] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:21.549 [2024-07-14 21:13:33.046183] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:21.549 [2024-07-14 21:13:33.046190] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:21.549 [2024-07-14 21:13:33.046493] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:21.549 [2024-07-14 21:13:33.046502] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:21.549 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:21.549 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:21.549 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:21.549 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:21.549 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:21.549 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:21.549 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:21.549 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:21.549 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:21.549 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:21.549 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:21.549 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:21.807 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:21.807 "name": "Existed_Raid", 00:14:21.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.807 "strip_size_kb": 64, 00:14:21.807 "state": "configuring", 00:14:21.807 "raid_level": "concat", 00:14:21.807 "superblock": false, 00:14:21.807 "num_base_bdevs": 4, 00:14:21.807 
"num_base_bdevs_discovered": 3, 00:14:21.807 "num_base_bdevs_operational": 4, 00:14:21.807 "base_bdevs_list": [ 00:14:21.807 { 00:14:21.807 "name": "BaseBdev1", 00:14:21.807 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:21.807 "is_configured": false, 00:14:21.807 "data_offset": 0, 00:14:21.807 "data_size": 0 00:14:21.807 }, 00:14:21.807 { 00:14:21.807 "name": "BaseBdev2", 00:14:21.807 "uuid": "ebcb80fc-4225-11ef-aa83-81fbc7dfef58", 00:14:21.807 "is_configured": true, 00:14:21.807 "data_offset": 0, 00:14:21.807 "data_size": 65536 00:14:21.807 }, 00:14:21.807 { 00:14:21.807 "name": "BaseBdev3", 00:14:21.807 "uuid": "ec3bcf61-4225-11ef-aa83-81fbc7dfef58", 00:14:21.807 "is_configured": true, 00:14:21.807 "data_offset": 0, 00:14:21.807 "data_size": 65536 00:14:21.807 }, 00:14:21.807 { 00:14:21.807 "name": "BaseBdev4", 00:14:21.807 "uuid": "ecadf2b5-4225-11ef-aa83-81fbc7dfef58", 00:14:21.807 "is_configured": true, 00:14:21.807 "data_offset": 0, 00:14:21.807 "data_size": 65536 00:14:21.807 } 00:14:21.807 ] 00:14:21.807 }' 00:14:21.808 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:21.808 21:13:33 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:22.065 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:22.323 [2024-07-14 21:13:33.738092] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:22.323 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:22.323 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:22.323 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:22.323 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:22.323 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:22.323 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:22.323 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:22.323 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:22.323 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:22.323 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:22.323 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:22.323 21:13:33 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:22.581 21:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:22.581 "name": "Existed_Raid", 00:14:22.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.581 "strip_size_kb": 64, 00:14:22.581 "state": "configuring", 00:14:22.581 "raid_level": "concat", 00:14:22.581 "superblock": false, 00:14:22.581 "num_base_bdevs": 4, 00:14:22.581 "num_base_bdevs_discovered": 2, 00:14:22.581 "num_base_bdevs_operational": 4, 00:14:22.581 "base_bdevs_list": [ 00:14:22.581 { 00:14:22.581 
"name": "BaseBdev1", 00:14:22.581 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:22.581 "is_configured": false, 00:14:22.581 "data_offset": 0, 00:14:22.581 "data_size": 0 00:14:22.581 }, 00:14:22.581 { 00:14:22.581 "name": null, 00:14:22.581 "uuid": "ebcb80fc-4225-11ef-aa83-81fbc7dfef58", 00:14:22.581 "is_configured": false, 00:14:22.581 "data_offset": 0, 00:14:22.581 "data_size": 65536 00:14:22.581 }, 00:14:22.581 { 00:14:22.581 "name": "BaseBdev3", 00:14:22.581 "uuid": "ec3bcf61-4225-11ef-aa83-81fbc7dfef58", 00:14:22.581 "is_configured": true, 00:14:22.581 "data_offset": 0, 00:14:22.581 "data_size": 65536 00:14:22.581 }, 00:14:22.581 { 00:14:22.581 "name": "BaseBdev4", 00:14:22.581 "uuid": "ecadf2b5-4225-11ef-aa83-81fbc7dfef58", 00:14:22.581 "is_configured": true, 00:14:22.581 "data_offset": 0, 00:14:22.581 "data_size": 65536 00:14:22.581 } 00:14:22.581 ] 00:14:22.581 }' 00:14:22.581 21:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:22.581 21:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:23.146 21:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.146 21:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:23.146 21:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:23.146 21:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:23.405 [2024-07-14 21:13:34.782172] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:23.405 BaseBdev1 00:14:23.405 21:13:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:23.405 21:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:23.405 21:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:23.405 21:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:23.405 21:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:23.405 21:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:23.405 21:13:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:23.662 21:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:23.920 [ 00:14:23.920 { 00:14:23.920 "name": "BaseBdev1", 00:14:23.920 "aliases": [ 00:14:23.920 "ee1c2a14-4225-11ef-aa83-81fbc7dfef58" 00:14:23.920 ], 00:14:23.920 "product_name": "Malloc disk", 00:14:23.920 "block_size": 512, 00:14:23.920 "num_blocks": 65536, 00:14:23.920 "uuid": "ee1c2a14-4225-11ef-aa83-81fbc7dfef58", 00:14:23.920 "assigned_rate_limits": { 00:14:23.920 "rw_ios_per_sec": 0, 00:14:23.920 "rw_mbytes_per_sec": 0, 00:14:23.920 "r_mbytes_per_sec": 0, 00:14:23.920 "w_mbytes_per_sec": 0 00:14:23.920 }, 00:14:23.920 "claimed": true, 00:14:23.920 "claim_type": "exclusive_write", 00:14:23.920 "zoned": false, 
00:14:23.920 "supported_io_types": { 00:14:23.920 "read": true, 00:14:23.920 "write": true, 00:14:23.920 "unmap": true, 00:14:23.920 "flush": true, 00:14:23.920 "reset": true, 00:14:23.920 "nvme_admin": false, 00:14:23.920 "nvme_io": false, 00:14:23.920 "nvme_io_md": false, 00:14:23.920 "write_zeroes": true, 00:14:23.920 "zcopy": true, 00:14:23.920 "get_zone_info": false, 00:14:23.920 "zone_management": false, 00:14:23.920 "zone_append": false, 00:14:23.920 "compare": false, 00:14:23.920 "compare_and_write": false, 00:14:23.920 "abort": true, 00:14:23.920 "seek_hole": false, 00:14:23.920 "seek_data": false, 00:14:23.920 "copy": true, 00:14:23.920 "nvme_iov_md": false 00:14:23.920 }, 00:14:23.920 "memory_domains": [ 00:14:23.920 { 00:14:23.920 "dma_device_id": "system", 00:14:23.920 "dma_device_type": 1 00:14:23.920 }, 00:14:23.920 { 00:14:23.920 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:23.920 "dma_device_type": 2 00:14:23.920 } 00:14:23.920 ], 00:14:23.920 "driver_specific": {} 00:14:23.920 } 00:14:23.920 ] 00:14:23.920 21:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:23.920 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:23.920 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:23.920 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:23.920 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:23.920 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:23.920 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:23.920 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:23.920 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:23.920 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:23.920 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:23.920 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:23.920 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:24.178 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:24.178 "name": "Existed_Raid", 00:14:24.178 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:24.178 "strip_size_kb": 64, 00:14:24.178 "state": "configuring", 00:14:24.178 "raid_level": "concat", 00:14:24.178 "superblock": false, 00:14:24.178 "num_base_bdevs": 4, 00:14:24.178 "num_base_bdevs_discovered": 3, 00:14:24.178 "num_base_bdevs_operational": 4, 00:14:24.178 "base_bdevs_list": [ 00:14:24.178 { 00:14:24.178 "name": "BaseBdev1", 00:14:24.178 "uuid": "ee1c2a14-4225-11ef-aa83-81fbc7dfef58", 00:14:24.178 "is_configured": true, 00:14:24.178 "data_offset": 0, 00:14:24.178 "data_size": 65536 00:14:24.178 }, 00:14:24.178 { 00:14:24.178 "name": null, 00:14:24.178 "uuid": "ebcb80fc-4225-11ef-aa83-81fbc7dfef58", 00:14:24.178 "is_configured": false, 00:14:24.178 "data_offset": 0, 00:14:24.178 "data_size": 65536 00:14:24.178 
}, 00:14:24.178 { 00:14:24.178 "name": "BaseBdev3", 00:14:24.178 "uuid": "ec3bcf61-4225-11ef-aa83-81fbc7dfef58", 00:14:24.178 "is_configured": true, 00:14:24.178 "data_offset": 0, 00:14:24.178 "data_size": 65536 00:14:24.178 }, 00:14:24.178 { 00:14:24.178 "name": "BaseBdev4", 00:14:24.178 "uuid": "ecadf2b5-4225-11ef-aa83-81fbc7dfef58", 00:14:24.178 "is_configured": true, 00:14:24.178 "data_offset": 0, 00:14:24.178 "data_size": 65536 00:14:24.178 } 00:14:24.178 ] 00:14:24.178 }' 00:14:24.178 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:24.178 21:13:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:24.437 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.437 21:13:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:24.696 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:24.696 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:24.954 [2024-07-14 21:13:36.362115] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:24.954 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:24.954 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:24.954 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:24.954 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:24.954 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:24.954 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:24.954 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:24.954 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:24.954 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:24.954 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:24.954 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:24.954 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:25.213 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:25.213 "name": "Existed_Raid", 00:14:25.213 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:25.213 "strip_size_kb": 64, 00:14:25.213 "state": "configuring", 00:14:25.213 "raid_level": "concat", 00:14:25.213 "superblock": false, 00:14:25.213 "num_base_bdevs": 4, 00:14:25.213 "num_base_bdevs_discovered": 2, 00:14:25.213 "num_base_bdevs_operational": 4, 00:14:25.213 "base_bdevs_list": [ 00:14:25.213 { 00:14:25.213 "name": "BaseBdev1", 00:14:25.213 "uuid": "ee1c2a14-4225-11ef-aa83-81fbc7dfef58", 00:14:25.213 "is_configured": true, 00:14:25.213 
"data_offset": 0, 00:14:25.213 "data_size": 65536 00:14:25.213 }, 00:14:25.213 { 00:14:25.213 "name": null, 00:14:25.213 "uuid": "ebcb80fc-4225-11ef-aa83-81fbc7dfef58", 00:14:25.213 "is_configured": false, 00:14:25.213 "data_offset": 0, 00:14:25.213 "data_size": 65536 00:14:25.213 }, 00:14:25.213 { 00:14:25.213 "name": null, 00:14:25.213 "uuid": "ec3bcf61-4225-11ef-aa83-81fbc7dfef58", 00:14:25.213 "is_configured": false, 00:14:25.213 "data_offset": 0, 00:14:25.213 "data_size": 65536 00:14:25.213 }, 00:14:25.213 { 00:14:25.213 "name": "BaseBdev4", 00:14:25.213 "uuid": "ecadf2b5-4225-11ef-aa83-81fbc7dfef58", 00:14:25.213 "is_configured": true, 00:14:25.213 "data_offset": 0, 00:14:25.213 "data_size": 65536 00:14:25.213 } 00:14:25.213 ] 00:14:25.213 }' 00:14:25.213 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:25.213 21:13:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:25.471 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.471 21:13:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:25.729 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:25.729 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:25.987 [2024-07-14 21:13:37.330141] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:25.987 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:25.987 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:25.987 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:25.987 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:25.987 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:25.987 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:25.987 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:25.987 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:25.987 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:25.987 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:25.987 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:25.987 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:26.246 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:26.246 "name": "Existed_Raid", 00:14:26.246 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:26.246 "strip_size_kb": 64, 00:14:26.246 "state": "configuring", 00:14:26.246 "raid_level": "concat", 00:14:26.246 "superblock": false, 00:14:26.246 
"num_base_bdevs": 4, 00:14:26.246 "num_base_bdevs_discovered": 3, 00:14:26.246 "num_base_bdevs_operational": 4, 00:14:26.246 "base_bdevs_list": [ 00:14:26.246 { 00:14:26.246 "name": "BaseBdev1", 00:14:26.246 "uuid": "ee1c2a14-4225-11ef-aa83-81fbc7dfef58", 00:14:26.246 "is_configured": true, 00:14:26.246 "data_offset": 0, 00:14:26.246 "data_size": 65536 00:14:26.246 }, 00:14:26.246 { 00:14:26.246 "name": null, 00:14:26.246 "uuid": "ebcb80fc-4225-11ef-aa83-81fbc7dfef58", 00:14:26.246 "is_configured": false, 00:14:26.246 "data_offset": 0, 00:14:26.246 "data_size": 65536 00:14:26.246 }, 00:14:26.246 { 00:14:26.246 "name": "BaseBdev3", 00:14:26.246 "uuid": "ec3bcf61-4225-11ef-aa83-81fbc7dfef58", 00:14:26.246 "is_configured": true, 00:14:26.246 "data_offset": 0, 00:14:26.246 "data_size": 65536 00:14:26.246 }, 00:14:26.246 { 00:14:26.246 "name": "BaseBdev4", 00:14:26.246 "uuid": "ecadf2b5-4225-11ef-aa83-81fbc7dfef58", 00:14:26.246 "is_configured": true, 00:14:26.246 "data_offset": 0, 00:14:26.246 "data_size": 65536 00:14:26.246 } 00:14:26.246 ] 00:14:26.246 }' 00:14:26.246 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:26.246 21:13:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:26.505 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:26.505 21:13:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:26.763 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:26.763 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:27.022 [2024-07-14 21:13:38.338195] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:27.022 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:27.022 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:27.022 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:27.022 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:27.022 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:27.022 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:27.022 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:27.022 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:27.022 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:27.022 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:27.022 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.022 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:27.281 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:14:27.281 "name": "Existed_Raid", 00:14:27.281 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:27.281 "strip_size_kb": 64, 00:14:27.281 "state": "configuring", 00:14:27.281 "raid_level": "concat", 00:14:27.281 "superblock": false, 00:14:27.281 "num_base_bdevs": 4, 00:14:27.281 "num_base_bdevs_discovered": 2, 00:14:27.281 "num_base_bdevs_operational": 4, 00:14:27.281 "base_bdevs_list": [ 00:14:27.281 { 00:14:27.281 "name": null, 00:14:27.281 "uuid": "ee1c2a14-4225-11ef-aa83-81fbc7dfef58", 00:14:27.281 "is_configured": false, 00:14:27.281 "data_offset": 0, 00:14:27.281 "data_size": 65536 00:14:27.281 }, 00:14:27.281 { 00:14:27.281 "name": null, 00:14:27.281 "uuid": "ebcb80fc-4225-11ef-aa83-81fbc7dfef58", 00:14:27.281 "is_configured": false, 00:14:27.281 "data_offset": 0, 00:14:27.281 "data_size": 65536 00:14:27.281 }, 00:14:27.281 { 00:14:27.281 "name": "BaseBdev3", 00:14:27.281 "uuid": "ec3bcf61-4225-11ef-aa83-81fbc7dfef58", 00:14:27.281 "is_configured": true, 00:14:27.281 "data_offset": 0, 00:14:27.281 "data_size": 65536 00:14:27.281 }, 00:14:27.281 { 00:14:27.281 "name": "BaseBdev4", 00:14:27.281 "uuid": "ecadf2b5-4225-11ef-aa83-81fbc7dfef58", 00:14:27.281 "is_configured": true, 00:14:27.281 "data_offset": 0, 00:14:27.281 "data_size": 65536 00:14:27.281 } 00:14:27.281 ] 00:14:27.281 }' 00:14:27.281 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:27.281 21:13:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:27.540 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:27.540 21:13:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:27.798 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:27.798 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:28.057 [2024-07-14 21:13:39.355503] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:28.057 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:28.057 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:28.057 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:28.057 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:28.057 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:28.057 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:28.057 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:28.057 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:28.057 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:28.057 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:28.057 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.057 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:28.315 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:28.315 "name": "Existed_Raid", 00:14:28.315 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:28.315 "strip_size_kb": 64, 00:14:28.315 "state": "configuring", 00:14:28.315 "raid_level": "concat", 00:14:28.315 "superblock": false, 00:14:28.315 "num_base_bdevs": 4, 00:14:28.315 "num_base_bdevs_discovered": 3, 00:14:28.315 "num_base_bdevs_operational": 4, 00:14:28.315 "base_bdevs_list": [ 00:14:28.315 { 00:14:28.315 "name": null, 00:14:28.315 "uuid": "ee1c2a14-4225-11ef-aa83-81fbc7dfef58", 00:14:28.315 "is_configured": false, 00:14:28.315 "data_offset": 0, 00:14:28.315 "data_size": 65536 00:14:28.315 }, 00:14:28.315 { 00:14:28.315 "name": "BaseBdev2", 00:14:28.315 "uuid": "ebcb80fc-4225-11ef-aa83-81fbc7dfef58", 00:14:28.315 "is_configured": true, 00:14:28.315 "data_offset": 0, 00:14:28.315 "data_size": 65536 00:14:28.315 }, 00:14:28.315 { 00:14:28.315 "name": "BaseBdev3", 00:14:28.315 "uuid": "ec3bcf61-4225-11ef-aa83-81fbc7dfef58", 00:14:28.315 "is_configured": true, 00:14:28.315 "data_offset": 0, 00:14:28.315 "data_size": 65536 00:14:28.315 }, 00:14:28.315 { 00:14:28.315 "name": "BaseBdev4", 00:14:28.315 "uuid": "ecadf2b5-4225-11ef-aa83-81fbc7dfef58", 00:14:28.315 "is_configured": true, 00:14:28.315 "data_offset": 0, 00:14:28.315 "data_size": 65536 00:14:28.315 } 00:14:28.315 ] 00:14:28.315 }' 00:14:28.315 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:28.315 21:13:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:28.574 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.574 21:13:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:28.833 21:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:28.833 21:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:28.833 21:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:14:29.091 21:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u ee1c2a14-4225-11ef-aa83-81fbc7dfef58 00:14:29.349 [2024-07-14 21:13:40.639660] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:29.349 [2024-07-14 21:13:40.639690] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xa72ed434f00 00:14:29.349 [2024-07-14 21:13:40.639711] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 262144, blocklen 512 00:14:29.349 [2024-07-14 21:13:40.639735] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xa72ed497e20 00:14:29.349 [2024-07-14 21:13:40.639820] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xa72ed434f00 00:14:29.349 [2024-07-14 21:13:40.639825] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid 
bdev is created with name Existed_Raid, raid_bdev 0xa72ed434f00 00:14:29.349 [2024-07-14 21:13:40.639861] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:29.349 NewBaseBdev 00:14:29.349 21:13:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:29.349 21:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:29.349 21:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:29.349 21:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:14:29.349 21:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:29.349 21:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:29.349 21:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:29.607 21:13:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:29.607 [ 00:14:29.607 { 00:14:29.607 "name": "NewBaseBdev", 00:14:29.607 "aliases": [ 00:14:29.607 "ee1c2a14-4225-11ef-aa83-81fbc7dfef58" 00:14:29.607 ], 00:14:29.607 "product_name": "Malloc disk", 00:14:29.607 "block_size": 512, 00:14:29.607 "num_blocks": 65536, 00:14:29.607 "uuid": "ee1c2a14-4225-11ef-aa83-81fbc7dfef58", 00:14:29.607 "assigned_rate_limits": { 00:14:29.607 "rw_ios_per_sec": 0, 00:14:29.607 "rw_mbytes_per_sec": 0, 00:14:29.607 "r_mbytes_per_sec": 0, 00:14:29.607 "w_mbytes_per_sec": 0 00:14:29.607 }, 00:14:29.607 "claimed": true, 00:14:29.607 "claim_type": "exclusive_write", 00:14:29.607 "zoned": false, 00:14:29.607 "supported_io_types": { 00:14:29.607 "read": true, 00:14:29.607 "write": true, 00:14:29.607 "unmap": true, 00:14:29.607 "flush": true, 00:14:29.607 "reset": true, 00:14:29.607 "nvme_admin": false, 00:14:29.607 "nvme_io": false, 00:14:29.607 "nvme_io_md": false, 00:14:29.607 "write_zeroes": true, 00:14:29.607 "zcopy": true, 00:14:29.607 "get_zone_info": false, 00:14:29.607 "zone_management": false, 00:14:29.607 "zone_append": false, 00:14:29.607 "compare": false, 00:14:29.607 "compare_and_write": false, 00:14:29.607 "abort": true, 00:14:29.607 "seek_hole": false, 00:14:29.607 "seek_data": false, 00:14:29.607 "copy": true, 00:14:29.607 "nvme_iov_md": false 00:14:29.607 }, 00:14:29.607 "memory_domains": [ 00:14:29.607 { 00:14:29.607 "dma_device_id": "system", 00:14:29.607 "dma_device_type": 1 00:14:29.607 }, 00:14:29.607 { 00:14:29.607 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:29.607 "dma_device_type": 2 00:14:29.607 } 00:14:29.607 ], 00:14:29.607 "driver_specific": {} 00:14:29.607 } 00:14:29.607 ] 00:14:29.607 21:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:14:29.608 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:29.608 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:29.608 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:29.608 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:29.608 21:13:41 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:29.608 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:29.608 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:29.608 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:29.608 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:29.608 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:29.608 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:29.608 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:29.866 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:29.866 "name": "Existed_Raid", 00:14:29.866 "uuid": "f199f6fb-4225-11ef-aa83-81fbc7dfef58", 00:14:29.866 "strip_size_kb": 64, 00:14:29.866 "state": "online", 00:14:29.866 "raid_level": "concat", 00:14:29.866 "superblock": false, 00:14:29.866 "num_base_bdevs": 4, 00:14:29.866 "num_base_bdevs_discovered": 4, 00:14:29.866 "num_base_bdevs_operational": 4, 00:14:29.866 "base_bdevs_list": [ 00:14:29.866 { 00:14:29.866 "name": "NewBaseBdev", 00:14:29.866 "uuid": "ee1c2a14-4225-11ef-aa83-81fbc7dfef58", 00:14:29.866 "is_configured": true, 00:14:29.866 "data_offset": 0, 00:14:29.866 "data_size": 65536 00:14:29.866 }, 00:14:29.866 { 00:14:29.866 "name": "BaseBdev2", 00:14:29.866 "uuid": "ebcb80fc-4225-11ef-aa83-81fbc7dfef58", 00:14:29.866 "is_configured": true, 00:14:29.866 "data_offset": 0, 00:14:29.866 "data_size": 65536 00:14:29.866 }, 00:14:29.866 { 00:14:29.866 "name": "BaseBdev3", 00:14:29.866 "uuid": "ec3bcf61-4225-11ef-aa83-81fbc7dfef58", 00:14:29.866 "is_configured": true, 00:14:29.866 "data_offset": 0, 00:14:29.866 "data_size": 65536 00:14:29.866 }, 00:14:29.866 { 00:14:29.866 "name": "BaseBdev4", 00:14:29.866 "uuid": "ecadf2b5-4225-11ef-aa83-81fbc7dfef58", 00:14:29.866 "is_configured": true, 00:14:29.866 "data_offset": 0, 00:14:29.866 "data_size": 65536 00:14:29.866 } 00:14:29.866 ] 00:14:29.866 }' 00:14:29.866 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:29.866 21:13:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:30.125 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:30.125 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:30.125 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:30.125 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:30.125 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:30.125 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:14:30.125 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:30.125 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 
00:14:30.383 [2024-07-14 21:13:41.879579] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:30.383 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:30.383 "name": "Existed_Raid", 00:14:30.383 "aliases": [ 00:14:30.383 "f199f6fb-4225-11ef-aa83-81fbc7dfef58" 00:14:30.383 ], 00:14:30.383 "product_name": "Raid Volume", 00:14:30.384 "block_size": 512, 00:14:30.384 "num_blocks": 262144, 00:14:30.384 "uuid": "f199f6fb-4225-11ef-aa83-81fbc7dfef58", 00:14:30.384 "assigned_rate_limits": { 00:14:30.384 "rw_ios_per_sec": 0, 00:14:30.384 "rw_mbytes_per_sec": 0, 00:14:30.384 "r_mbytes_per_sec": 0, 00:14:30.384 "w_mbytes_per_sec": 0 00:14:30.384 }, 00:14:30.384 "claimed": false, 00:14:30.384 "zoned": false, 00:14:30.384 "supported_io_types": { 00:14:30.384 "read": true, 00:14:30.384 "write": true, 00:14:30.384 "unmap": true, 00:14:30.384 "flush": true, 00:14:30.384 "reset": true, 00:14:30.384 "nvme_admin": false, 00:14:30.384 "nvme_io": false, 00:14:30.384 "nvme_io_md": false, 00:14:30.384 "write_zeroes": true, 00:14:30.384 "zcopy": false, 00:14:30.384 "get_zone_info": false, 00:14:30.384 "zone_management": false, 00:14:30.384 "zone_append": false, 00:14:30.384 "compare": false, 00:14:30.384 "compare_and_write": false, 00:14:30.384 "abort": false, 00:14:30.384 "seek_hole": false, 00:14:30.384 "seek_data": false, 00:14:30.384 "copy": false, 00:14:30.384 "nvme_iov_md": false 00:14:30.384 }, 00:14:30.384 "memory_domains": [ 00:14:30.384 { 00:14:30.384 "dma_device_id": "system", 00:14:30.384 "dma_device_type": 1 00:14:30.384 }, 00:14:30.384 { 00:14:30.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.384 "dma_device_type": 2 00:14:30.384 }, 00:14:30.384 { 00:14:30.384 "dma_device_id": "system", 00:14:30.384 "dma_device_type": 1 00:14:30.384 }, 00:14:30.384 { 00:14:30.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.384 "dma_device_type": 2 00:14:30.384 }, 00:14:30.384 { 00:14:30.384 "dma_device_id": "system", 00:14:30.384 "dma_device_type": 1 00:14:30.384 }, 00:14:30.384 { 00:14:30.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.384 "dma_device_type": 2 00:14:30.384 }, 00:14:30.384 { 00:14:30.384 "dma_device_id": "system", 00:14:30.384 "dma_device_type": 1 00:14:30.384 }, 00:14:30.384 { 00:14:30.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.384 "dma_device_type": 2 00:14:30.384 } 00:14:30.384 ], 00:14:30.384 "driver_specific": { 00:14:30.384 "raid": { 00:14:30.384 "uuid": "f199f6fb-4225-11ef-aa83-81fbc7dfef58", 00:14:30.384 "strip_size_kb": 64, 00:14:30.384 "state": "online", 00:14:30.384 "raid_level": "concat", 00:14:30.384 "superblock": false, 00:14:30.384 "num_base_bdevs": 4, 00:14:30.384 "num_base_bdevs_discovered": 4, 00:14:30.384 "num_base_bdevs_operational": 4, 00:14:30.384 "base_bdevs_list": [ 00:14:30.384 { 00:14:30.384 "name": "NewBaseBdev", 00:14:30.384 "uuid": "ee1c2a14-4225-11ef-aa83-81fbc7dfef58", 00:14:30.384 "is_configured": true, 00:14:30.384 "data_offset": 0, 00:14:30.384 "data_size": 65536 00:14:30.384 }, 00:14:30.384 { 00:14:30.384 "name": "BaseBdev2", 00:14:30.384 "uuid": "ebcb80fc-4225-11ef-aa83-81fbc7dfef58", 00:14:30.384 "is_configured": true, 00:14:30.384 "data_offset": 0, 00:14:30.384 "data_size": 65536 00:14:30.384 }, 00:14:30.384 { 00:14:30.384 "name": "BaseBdev3", 00:14:30.384 "uuid": "ec3bcf61-4225-11ef-aa83-81fbc7dfef58", 00:14:30.384 "is_configured": true, 00:14:30.384 "data_offset": 0, 00:14:30.384 "data_size": 65536 00:14:30.384 }, 00:14:30.384 { 00:14:30.384 
"name": "BaseBdev4", 00:14:30.384 "uuid": "ecadf2b5-4225-11ef-aa83-81fbc7dfef58", 00:14:30.384 "is_configured": true, 00:14:30.384 "data_offset": 0, 00:14:30.384 "data_size": 65536 00:14:30.384 } 00:14:30.384 ] 00:14:30.384 } 00:14:30.384 } 00:14:30.384 }' 00:14:30.384 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:30.384 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:30.384 BaseBdev2 00:14:30.384 BaseBdev3 00:14:30.384 BaseBdev4' 00:14:30.384 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.384 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:30.384 21:13:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:30.642 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:30.642 "name": "NewBaseBdev", 00:14:30.642 "aliases": [ 00:14:30.642 "ee1c2a14-4225-11ef-aa83-81fbc7dfef58" 00:14:30.642 ], 00:14:30.642 "product_name": "Malloc disk", 00:14:30.642 "block_size": 512, 00:14:30.642 "num_blocks": 65536, 00:14:30.642 "uuid": "ee1c2a14-4225-11ef-aa83-81fbc7dfef58", 00:14:30.642 "assigned_rate_limits": { 00:14:30.642 "rw_ios_per_sec": 0, 00:14:30.642 "rw_mbytes_per_sec": 0, 00:14:30.642 "r_mbytes_per_sec": 0, 00:14:30.642 "w_mbytes_per_sec": 0 00:14:30.642 }, 00:14:30.642 "claimed": true, 00:14:30.642 "claim_type": "exclusive_write", 00:14:30.642 "zoned": false, 00:14:30.642 "supported_io_types": { 00:14:30.642 "read": true, 00:14:30.642 "write": true, 00:14:30.642 "unmap": true, 00:14:30.642 "flush": true, 00:14:30.642 "reset": true, 00:14:30.642 "nvme_admin": false, 00:14:30.642 "nvme_io": false, 00:14:30.642 "nvme_io_md": false, 00:14:30.642 "write_zeroes": true, 00:14:30.642 "zcopy": true, 00:14:30.642 "get_zone_info": false, 00:14:30.642 "zone_management": false, 00:14:30.642 "zone_append": false, 00:14:30.642 "compare": false, 00:14:30.642 "compare_and_write": false, 00:14:30.643 "abort": true, 00:14:30.643 "seek_hole": false, 00:14:30.643 "seek_data": false, 00:14:30.643 "copy": true, 00:14:30.643 "nvme_iov_md": false 00:14:30.643 }, 00:14:30.643 "memory_domains": [ 00:14:30.643 { 00:14:30.643 "dma_device_id": "system", 00:14:30.643 "dma_device_type": 1 00:14:30.643 }, 00:14:30.643 { 00:14:30.643 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:30.643 "dma_device_type": 2 00:14:30.643 } 00:14:30.643 ], 00:14:30.643 "driver_specific": {} 00:14:30.643 }' 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:30.643 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:31.209 "name": "BaseBdev2", 00:14:31.209 "aliases": [ 00:14:31.209 "ebcb80fc-4225-11ef-aa83-81fbc7dfef58" 00:14:31.209 ], 00:14:31.209 "product_name": "Malloc disk", 00:14:31.209 "block_size": 512, 00:14:31.209 "num_blocks": 65536, 00:14:31.209 "uuid": "ebcb80fc-4225-11ef-aa83-81fbc7dfef58", 00:14:31.209 "assigned_rate_limits": { 00:14:31.209 "rw_ios_per_sec": 0, 00:14:31.209 "rw_mbytes_per_sec": 0, 00:14:31.209 "r_mbytes_per_sec": 0, 00:14:31.209 "w_mbytes_per_sec": 0 00:14:31.209 }, 00:14:31.209 "claimed": true, 00:14:31.209 "claim_type": "exclusive_write", 00:14:31.209 "zoned": false, 00:14:31.209 "supported_io_types": { 00:14:31.209 "read": true, 00:14:31.209 "write": true, 00:14:31.209 "unmap": true, 00:14:31.209 "flush": true, 00:14:31.209 "reset": true, 00:14:31.209 "nvme_admin": false, 00:14:31.209 "nvme_io": false, 00:14:31.209 "nvme_io_md": false, 00:14:31.209 "write_zeroes": true, 00:14:31.209 "zcopy": true, 00:14:31.209 "get_zone_info": false, 00:14:31.209 "zone_management": false, 00:14:31.209 "zone_append": false, 00:14:31.209 "compare": false, 00:14:31.209 "compare_and_write": false, 00:14:31.209 "abort": true, 00:14:31.209 "seek_hole": false, 00:14:31.209 "seek_data": false, 00:14:31.209 "copy": true, 00:14:31.209 "nvme_iov_md": false 00:14:31.209 }, 00:14:31.209 "memory_domains": [ 00:14:31.209 { 00:14:31.209 "dma_device_id": "system", 00:14:31.209 "dma_device_type": 1 00:14:31.209 }, 00:14:31.209 { 00:14:31.209 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.209 "dma_device_type": 2 00:14:31.209 } 00:14:31.209 ], 00:14:31.209 "driver_specific": {} 00:14:31.209 }' 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:31.209 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:31.468 "name": "BaseBdev3", 00:14:31.468 "aliases": [ 00:14:31.468 "ec3bcf61-4225-11ef-aa83-81fbc7dfef58" 00:14:31.468 ], 00:14:31.468 "product_name": "Malloc disk", 00:14:31.468 "block_size": 512, 00:14:31.468 "num_blocks": 65536, 00:14:31.468 "uuid": "ec3bcf61-4225-11ef-aa83-81fbc7dfef58", 00:14:31.468 "assigned_rate_limits": { 00:14:31.468 "rw_ios_per_sec": 0, 00:14:31.468 "rw_mbytes_per_sec": 0, 00:14:31.468 "r_mbytes_per_sec": 0, 00:14:31.468 "w_mbytes_per_sec": 0 00:14:31.468 }, 00:14:31.468 "claimed": true, 00:14:31.468 "claim_type": "exclusive_write", 00:14:31.468 "zoned": false, 00:14:31.468 "supported_io_types": { 00:14:31.468 "read": true, 00:14:31.468 "write": true, 00:14:31.468 "unmap": true, 00:14:31.468 "flush": true, 00:14:31.468 "reset": true, 00:14:31.468 "nvme_admin": false, 00:14:31.468 "nvme_io": false, 00:14:31.468 "nvme_io_md": false, 00:14:31.468 "write_zeroes": true, 00:14:31.468 "zcopy": true, 00:14:31.468 "get_zone_info": false, 00:14:31.468 "zone_management": false, 00:14:31.468 "zone_append": false, 00:14:31.468 "compare": false, 00:14:31.468 "compare_and_write": false, 00:14:31.468 "abort": true, 00:14:31.468 "seek_hole": false, 00:14:31.468 "seek_data": false, 00:14:31.468 "copy": true, 00:14:31.468 "nvme_iov_md": false 00:14:31.468 }, 00:14:31.468 "memory_domains": [ 00:14:31.468 { 00:14:31.468 "dma_device_id": "system", 00:14:31.468 "dma_device_type": 1 00:14:31.468 }, 00:14:31.468 { 00:14:31.468 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.468 "dma_device_type": 2 00:14:31.468 } 00:14:31.468 ], 00:14:31.468 "driver_specific": {} 00:14:31.468 }' 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:31.468 21:13:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:31.727 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:31.727 "name": "BaseBdev4", 00:14:31.727 "aliases": [ 00:14:31.727 "ecadf2b5-4225-11ef-aa83-81fbc7dfef58" 00:14:31.727 ], 00:14:31.727 "product_name": "Malloc disk", 00:14:31.727 "block_size": 512, 00:14:31.727 "num_blocks": 65536, 00:14:31.727 "uuid": "ecadf2b5-4225-11ef-aa83-81fbc7dfef58", 00:14:31.727 "assigned_rate_limits": { 00:14:31.727 "rw_ios_per_sec": 0, 00:14:31.727 "rw_mbytes_per_sec": 0, 00:14:31.727 "r_mbytes_per_sec": 0, 00:14:31.727 "w_mbytes_per_sec": 0 00:14:31.727 }, 00:14:31.727 "claimed": true, 00:14:31.727 "claim_type": "exclusive_write", 00:14:31.727 "zoned": false, 00:14:31.727 "supported_io_types": { 00:14:31.727 "read": true, 00:14:31.727 "write": true, 00:14:31.727 "unmap": true, 00:14:31.727 "flush": true, 00:14:31.727 "reset": true, 00:14:31.727 "nvme_admin": false, 00:14:31.727 "nvme_io": false, 00:14:31.727 "nvme_io_md": false, 00:14:31.727 "write_zeroes": true, 00:14:31.727 "zcopy": true, 00:14:31.727 "get_zone_info": false, 00:14:31.727 "zone_management": false, 00:14:31.727 "zone_append": false, 00:14:31.727 "compare": false, 00:14:31.727 "compare_and_write": false, 00:14:31.727 "abort": true, 00:14:31.727 "seek_hole": false, 00:14:31.727 "seek_data": false, 00:14:31.727 "copy": true, 00:14:31.727 "nvme_iov_md": false 00:14:31.727 }, 00:14:31.727 "memory_domains": [ 00:14:31.727 { 00:14:31.727 "dma_device_id": "system", 00:14:31.727 "dma_device_type": 1 00:14:31.727 }, 00:14:31.727 { 00:14:31.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:31.727 "dma_device_type": 2 00:14:31.727 } 00:14:31.727 ], 00:14:31.727 "driver_specific": {} 00:14:31.727 }' 00:14:31.728 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.728 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:31.728 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:31.728 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.728 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:31.728 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:31.728 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.728 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:31.728 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:31.728 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.728 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:31.728 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:31.728 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:31.985 [2024-07-14 21:13:43.391580] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:31.985 [2024-07-14 21:13:43.391602] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:31.985 [2024-07-14 21:13:43.391634] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:31.985 [2024-07-14 21:13:43.391651] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:31.985 [2024-07-14 21:13:43.391655] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xa72ed434f00 name Existed_Raid, state offline 00:14:31.985 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 60586 00:14:31.985 21:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 60586 ']' 00:14:31.985 21:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 60586 00:14:31.985 21:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:14:31.985 21:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:31.985 21:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 60586 00:14:31.985 21:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:14:31.985 21:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:31.985 21:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:31.985 killing process with pid 60586 00:14:31.985 21:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 60586' 00:14:31.985 21:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 60586 00:14:31.985 [2024-07-14 21:13:43.421148] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:31.985 21:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 60586 00:14:31.985 [2024-07-14 21:13:43.457276] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:32.243 21:13:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:14:32.243 00:14:32.243 real 0m25.851s 00:14:32.243 user 0m46.940s 00:14:32.243 sys 0m3.860s 00:14:32.243 21:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:32.243 ************************************ 00:14:32.243 END TEST raid_state_function_test 00:14:32.243 ************************************ 00:14:32.243 21:13:43 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:14:32.244 21:13:43 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:32.244 21:13:43 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test concat 4 true 00:14:32.244 21:13:43 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:14:32.244 21:13:43 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:32.244 21:13:43 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:32.244 ************************************ 00:14:32.244 START TEST raid_state_function_test_sb 00:14:32.244 ************************************ 00:14:32.244 21:13:43 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # raid_state_function_test concat 4 true 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=concat 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' concat '!=' raid1 ']' 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@231 -- # strip_size=64 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@232 -- # strip_size_create_arg='-z 64' 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=61397 00:14:32.244 Process raid pid: 61397 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb 
-- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 61397' 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 61397 /var/tmp/spdk-raid.sock 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 61397 ']' 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:32.244 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:32.244 21:13:43 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:32.244 [2024-07-14 21:13:43.763061] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:32.244 [2024-07-14 21:13:43.763290] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:32.845 EAL: TSC is not safe to use in SMP mode 00:14:32.845 EAL: TSC is not invariant 00:14:32.845 [2024-07-14 21:13:44.308828] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:33.103 [2024-07-14 21:13:44.411881] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
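At this point the harness has launched bdev_svc with -r /var/tmp/spdk-raid.sock and waitforlisten blocks until that RPC socket answers before the test proceeds. A minimal sketch of the polling pattern implied by the trace above, assuming the rpc.py client and socket path shown in the log (the real helper lives in common/autotest_common.sh and may differ in details):

# Sketch of an RPC-socket wait loop, modeled on the waitforlisten
# behavior visible in this log. Assumes scripts/rpc.py and the socket
# path from the trace; the shipped helper may differ.
waitforlisten_sketch() {
    local pid=$1
    local sock=${2:-/var/tmp/spdk-raid.sock}
    local max_retries=100
    local i
    for ((i = 0; i < max_retries; i++)); do
        # Give up early if the target process died during startup,
        # mirroring the kill -0 liveness check seen elsewhere in the log.
        kill -0 "$pid" 2>/dev/null || return 1
        # Ready once the UNIX socket exists and answers a basic RPC.
        if [[ -S $sock ]] && scripts/rpc.py -s "$sock" rpc_get_methods &>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}

Once this returns, the bdev_raid_create RPC traced immediately below can be issued against the same socket.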
00:14:33.103 [2024-07-14 21:13:44.414398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:33.103 [2024-07-14 21:13:44.415365] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.103 [2024-07-14 21:13:44.415382] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:33.361 21:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:33.361 21:13:44 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:14:33.361 21:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:33.620 [2024-07-14 21:13:44.921377] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:33.620 [2024-07-14 21:13:44.921431] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:33.620 [2024-07-14 21:13:44.921436] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:33.620 [2024-07-14 21:13:44.921459] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:33.620 [2024-07-14 21:13:44.921462] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:33.620 [2024-07-14 21:13:44.921468] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:33.620 [2024-07-14 21:13:44.921471] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:33.620 [2024-07-14 21:13:44.921477] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:33.620 21:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:33.620 21:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:33.620 21:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:33.620 21:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:33.620 21:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:33.620 21:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:33.620 21:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:33.620 21:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:33.620 21:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:33.620 21:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:33.620 21:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:33.620 21:13:44 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:33.620 21:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:33.620 "name": "Existed_Raid", 00:14:33.620 "uuid": 
"f4274b29-4225-11ef-aa83-81fbc7dfef58", 00:14:33.620 "strip_size_kb": 64, 00:14:33.620 "state": "configuring", 00:14:33.620 "raid_level": "concat", 00:14:33.620 "superblock": true, 00:14:33.620 "num_base_bdevs": 4, 00:14:33.620 "num_base_bdevs_discovered": 0, 00:14:33.620 "num_base_bdevs_operational": 4, 00:14:33.620 "base_bdevs_list": [ 00:14:33.621 { 00:14:33.621 "name": "BaseBdev1", 00:14:33.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.621 "is_configured": false, 00:14:33.621 "data_offset": 0, 00:14:33.621 "data_size": 0 00:14:33.621 }, 00:14:33.621 { 00:14:33.621 "name": "BaseBdev2", 00:14:33.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.621 "is_configured": false, 00:14:33.621 "data_offset": 0, 00:14:33.621 "data_size": 0 00:14:33.621 }, 00:14:33.621 { 00:14:33.621 "name": "BaseBdev3", 00:14:33.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.621 "is_configured": false, 00:14:33.621 "data_offset": 0, 00:14:33.621 "data_size": 0 00:14:33.621 }, 00:14:33.621 { 00:14:33.621 "name": "BaseBdev4", 00:14:33.621 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:33.621 "is_configured": false, 00:14:33.621 "data_offset": 0, 00:14:33.621 "data_size": 0 00:14:33.621 } 00:14:33.621 ] 00:14:33.621 }' 00:14:33.621 21:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:33.621 21:13:45 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:33.879 21:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:34.136 [2024-07-14 21:13:45.669406] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:34.136 [2024-07-14 21:13:45.669426] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x37f24e234500 name Existed_Raid, state configuring 00:14:34.394 21:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:34.394 [2024-07-14 21:13:45.869434] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:34.394 [2024-07-14 21:13:45.869484] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:34.394 [2024-07-14 21:13:45.869488] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:34.394 [2024-07-14 21:13:45.869511] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:34.394 [2024-07-14 21:13:45.869514] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:34.394 [2024-07-14 21:13:45.869520] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:34.394 [2024-07-14 21:13:45.869523] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:34.394 [2024-07-14 21:13:45.869530] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:34.394 21:13:45 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:34.652 [2024-07-14 21:13:46.078296] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 
is claimed 00:14:34.652 BaseBdev1 00:14:34.652 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:14:34.652 21:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:34.652 21:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:34.652 21:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:34.652 21:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:34.652 21:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:34.652 21:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:34.910 21:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:35.168 [ 00:14:35.168 { 00:14:35.168 "name": "BaseBdev1", 00:14:35.168 "aliases": [ 00:14:35.168 "f4d7b1cd-4225-11ef-aa83-81fbc7dfef58" 00:14:35.168 ], 00:14:35.168 "product_name": "Malloc disk", 00:14:35.168 "block_size": 512, 00:14:35.168 "num_blocks": 65536, 00:14:35.168 "uuid": "f4d7b1cd-4225-11ef-aa83-81fbc7dfef58", 00:14:35.168 "assigned_rate_limits": { 00:14:35.168 "rw_ios_per_sec": 0, 00:14:35.168 "rw_mbytes_per_sec": 0, 00:14:35.168 "r_mbytes_per_sec": 0, 00:14:35.168 "w_mbytes_per_sec": 0 00:14:35.168 }, 00:14:35.168 "claimed": true, 00:14:35.168 "claim_type": "exclusive_write", 00:14:35.168 "zoned": false, 00:14:35.168 "supported_io_types": { 00:14:35.168 "read": true, 00:14:35.168 "write": true, 00:14:35.168 "unmap": true, 00:14:35.168 "flush": true, 00:14:35.168 "reset": true, 00:14:35.168 "nvme_admin": false, 00:14:35.168 "nvme_io": false, 00:14:35.168 "nvme_io_md": false, 00:14:35.168 "write_zeroes": true, 00:14:35.168 "zcopy": true, 00:14:35.168 "get_zone_info": false, 00:14:35.168 "zone_management": false, 00:14:35.168 "zone_append": false, 00:14:35.168 "compare": false, 00:14:35.168 "compare_and_write": false, 00:14:35.168 "abort": true, 00:14:35.168 "seek_hole": false, 00:14:35.168 "seek_data": false, 00:14:35.168 "copy": true, 00:14:35.168 "nvme_iov_md": false 00:14:35.168 }, 00:14:35.168 "memory_domains": [ 00:14:35.168 { 00:14:35.168 "dma_device_id": "system", 00:14:35.168 "dma_device_type": 1 00:14:35.168 }, 00:14:35.168 { 00:14:35.168 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:35.168 "dma_device_type": 2 00:14:35.168 } 00:14:35.168 ], 00:14:35.168 "driver_specific": {} 00:14:35.168 } 00:14:35.168 ] 00:14:35.168 21:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:35.168 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:35.168 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:35.168 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:35.168 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:35.168 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:35.168 21:13:46 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:35.168 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:35.168 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:35.169 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:35.169 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:35.169 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:35.169 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:35.427 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:35.427 "name": "Existed_Raid", 00:14:35.427 "uuid": "f4b7f49e-4225-11ef-aa83-81fbc7dfef58", 00:14:35.427 "strip_size_kb": 64, 00:14:35.427 "state": "configuring", 00:14:35.427 "raid_level": "concat", 00:14:35.427 "superblock": true, 00:14:35.427 "num_base_bdevs": 4, 00:14:35.427 "num_base_bdevs_discovered": 1, 00:14:35.427 "num_base_bdevs_operational": 4, 00:14:35.427 "base_bdevs_list": [ 00:14:35.427 { 00:14:35.427 "name": "BaseBdev1", 00:14:35.427 "uuid": "f4d7b1cd-4225-11ef-aa83-81fbc7dfef58", 00:14:35.427 "is_configured": true, 00:14:35.427 "data_offset": 2048, 00:14:35.427 "data_size": 63488 00:14:35.427 }, 00:14:35.427 { 00:14:35.427 "name": "BaseBdev2", 00:14:35.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.427 "is_configured": false, 00:14:35.427 "data_offset": 0, 00:14:35.427 "data_size": 0 00:14:35.427 }, 00:14:35.427 { 00:14:35.427 "name": "BaseBdev3", 00:14:35.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.427 "is_configured": false, 00:14:35.427 "data_offset": 0, 00:14:35.427 "data_size": 0 00:14:35.427 }, 00:14:35.427 { 00:14:35.427 "name": "BaseBdev4", 00:14:35.427 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:35.427 "is_configured": false, 00:14:35.427 "data_offset": 0, 00:14:35.427 "data_size": 0 00:14:35.427 } 00:14:35.427 ] 00:14:35.427 }' 00:14:35.427 21:13:46 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:35.427 21:13:46 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:35.686 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:35.944 [2024-07-14 21:13:47.361675] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:35.944 [2024-07-14 21:13:47.361719] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x37f24e234500 name Existed_Raid, state configuring 00:14:35.944 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:36.203 [2024-07-14 21:13:47.593703] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:36.203 [2024-07-14 21:13:47.594582] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:14:36.203 [2024-07-14 21:13:47.594619] bdev_raid_rpc.c: 
311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:14:36.203 [2024-07-14 21:13:47.594624] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:14:36.203 [2024-07-14 21:13:47.594631] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:14:36.203 [2024-07-14 21:13:47.594635] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:14:36.203 [2024-07-14 21:13:47.594642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:36.203 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:36.461 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:36.461 "name": "Existed_Raid", 00:14:36.461 "uuid": "f5bf0eb2-4225-11ef-aa83-81fbc7dfef58", 00:14:36.461 "strip_size_kb": 64, 00:14:36.461 "state": "configuring", 00:14:36.461 "raid_level": "concat", 00:14:36.461 "superblock": true, 00:14:36.461 "num_base_bdevs": 4, 00:14:36.461 "num_base_bdevs_discovered": 1, 00:14:36.461 "num_base_bdevs_operational": 4, 00:14:36.461 "base_bdevs_list": [ 00:14:36.461 { 00:14:36.461 "name": "BaseBdev1", 00:14:36.461 "uuid": "f4d7b1cd-4225-11ef-aa83-81fbc7dfef58", 00:14:36.461 "is_configured": true, 00:14:36.461 "data_offset": 2048, 00:14:36.461 "data_size": 63488 00:14:36.461 }, 00:14:36.461 { 00:14:36.461 "name": "BaseBdev2", 00:14:36.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.461 "is_configured": false, 00:14:36.461 "data_offset": 0, 00:14:36.461 "data_size": 0 00:14:36.461 }, 00:14:36.461 { 00:14:36.461 "name": "BaseBdev3", 00:14:36.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.461 "is_configured": false, 00:14:36.461 "data_offset": 0, 00:14:36.461 "data_size": 0 00:14:36.461 }, 00:14:36.461 { 00:14:36.461 "name": "BaseBdev4", 
00:14:36.461 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:36.461 "is_configured": false, 00:14:36.461 "data_offset": 0, 00:14:36.461 "data_size": 0 00:14:36.461 } 00:14:36.461 ] 00:14:36.461 }' 00:14:36.461 21:13:47 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:36.461 21:13:47 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:36.720 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:36.978 [2024-07-14 21:13:48.349944] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:36.978 BaseBdev2 00:14:36.978 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:14:36.978 21:13:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:36.978 21:13:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:36.978 21:13:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:36.978 21:13:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:36.978 21:13:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:36.978 21:13:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:37.237 21:13:48 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:37.237 [ 00:14:37.237 { 00:14:37.237 "name": "BaseBdev2", 00:14:37.237 "aliases": [ 00:14:37.237 "f6326d0a-4225-11ef-aa83-81fbc7dfef58" 00:14:37.237 ], 00:14:37.237 "product_name": "Malloc disk", 00:14:37.237 "block_size": 512, 00:14:37.237 "num_blocks": 65536, 00:14:37.237 "uuid": "f6326d0a-4225-11ef-aa83-81fbc7dfef58", 00:14:37.237 "assigned_rate_limits": { 00:14:37.237 "rw_ios_per_sec": 0, 00:14:37.237 "rw_mbytes_per_sec": 0, 00:14:37.237 "r_mbytes_per_sec": 0, 00:14:37.237 "w_mbytes_per_sec": 0 00:14:37.237 }, 00:14:37.237 "claimed": true, 00:14:37.237 "claim_type": "exclusive_write", 00:14:37.237 "zoned": false, 00:14:37.237 "supported_io_types": { 00:14:37.237 "read": true, 00:14:37.237 "write": true, 00:14:37.237 "unmap": true, 00:14:37.237 "flush": true, 00:14:37.237 "reset": true, 00:14:37.237 "nvme_admin": false, 00:14:37.237 "nvme_io": false, 00:14:37.237 "nvme_io_md": false, 00:14:37.237 "write_zeroes": true, 00:14:37.237 "zcopy": true, 00:14:37.237 "get_zone_info": false, 00:14:37.237 "zone_management": false, 00:14:37.237 "zone_append": false, 00:14:37.237 "compare": false, 00:14:37.237 "compare_and_write": false, 00:14:37.237 "abort": true, 00:14:37.237 "seek_hole": false, 00:14:37.237 "seek_data": false, 00:14:37.237 "copy": true, 00:14:37.237 "nvme_iov_md": false 00:14:37.237 }, 00:14:37.237 "memory_domains": [ 00:14:37.237 { 00:14:37.237 "dma_device_id": "system", 00:14:37.237 "dma_device_type": 1 00:14:37.237 }, 00:14:37.237 { 00:14:37.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:37.237 "dma_device_type": 2 00:14:37.237 } 00:14:37.237 ], 00:14:37.237 "driver_specific": {} 00:14:37.237 } 00:14:37.237 ] 00:14:37.237 21:13:48 
bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:37.237 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:37.237 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:37.237 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:37.237 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:37.237 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:37.237 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:37.237 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:37.237 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:37.237 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:37.237 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:37.237 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:37.237 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:37.496 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:37.496 21:13:48 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:37.496 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:37.496 "name": "Existed_Raid", 00:14:37.496 "uuid": "f5bf0eb2-4225-11ef-aa83-81fbc7dfef58", 00:14:37.496 "strip_size_kb": 64, 00:14:37.496 "state": "configuring", 00:14:37.496 "raid_level": "concat", 00:14:37.496 "superblock": true, 00:14:37.496 "num_base_bdevs": 4, 00:14:37.496 "num_base_bdevs_discovered": 2, 00:14:37.496 "num_base_bdevs_operational": 4, 00:14:37.496 "base_bdevs_list": [ 00:14:37.496 { 00:14:37.496 "name": "BaseBdev1", 00:14:37.496 "uuid": "f4d7b1cd-4225-11ef-aa83-81fbc7dfef58", 00:14:37.496 "is_configured": true, 00:14:37.496 "data_offset": 2048, 00:14:37.496 "data_size": 63488 00:14:37.496 }, 00:14:37.496 { 00:14:37.496 "name": "BaseBdev2", 00:14:37.496 "uuid": "f6326d0a-4225-11ef-aa83-81fbc7dfef58", 00:14:37.496 "is_configured": true, 00:14:37.496 "data_offset": 2048, 00:14:37.496 "data_size": 63488 00:14:37.496 }, 00:14:37.496 { 00:14:37.496 "name": "BaseBdev3", 00:14:37.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.496 "is_configured": false, 00:14:37.496 "data_offset": 0, 00:14:37.496 "data_size": 0 00:14:37.496 }, 00:14:37.496 { 00:14:37.496 "name": "BaseBdev4", 00:14:37.496 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:37.496 "is_configured": false, 00:14:37.496 "data_offset": 0, 00:14:37.496 "data_size": 0 00:14:37.496 } 00:14:37.496 ] 00:14:37.496 }' 00:14:37.496 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:37.496 21:13:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:38.063 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:38.063 [2024-07-14 21:13:49.549958] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:38.063 BaseBdev3 00:14:38.063 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:14:38.063 21:13:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:38.063 21:13:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:38.063 21:13:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:38.063 21:13:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:38.063 21:13:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:38.063 21:13:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:38.322 21:13:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:38.580 [ 00:14:38.580 { 00:14:38.580 "name": "BaseBdev3", 00:14:38.580 "aliases": [ 00:14:38.580 "f6e98958-4225-11ef-aa83-81fbc7dfef58" 00:14:38.580 ], 00:14:38.580 "product_name": "Malloc disk", 00:14:38.580 "block_size": 512, 00:14:38.580 "num_blocks": 65536, 00:14:38.580 "uuid": "f6e98958-4225-11ef-aa83-81fbc7dfef58", 00:14:38.580 "assigned_rate_limits": { 00:14:38.580 "rw_ios_per_sec": 0, 00:14:38.580 "rw_mbytes_per_sec": 0, 00:14:38.580 "r_mbytes_per_sec": 0, 00:14:38.580 "w_mbytes_per_sec": 0 00:14:38.580 }, 00:14:38.580 "claimed": true, 00:14:38.580 "claim_type": "exclusive_write", 00:14:38.580 "zoned": false, 00:14:38.580 "supported_io_types": { 00:14:38.580 "read": true, 00:14:38.580 "write": true, 00:14:38.580 "unmap": true, 00:14:38.580 "flush": true, 00:14:38.580 "reset": true, 00:14:38.580 "nvme_admin": false, 00:14:38.580 "nvme_io": false, 00:14:38.580 "nvme_io_md": false, 00:14:38.580 "write_zeroes": true, 00:14:38.580 "zcopy": true, 00:14:38.580 "get_zone_info": false, 00:14:38.580 "zone_management": false, 00:14:38.580 "zone_append": false, 00:14:38.580 "compare": false, 00:14:38.580 "compare_and_write": false, 00:14:38.580 "abort": true, 00:14:38.580 "seek_hole": false, 00:14:38.580 "seek_data": false, 00:14:38.580 "copy": true, 00:14:38.580 "nvme_iov_md": false 00:14:38.580 }, 00:14:38.580 "memory_domains": [ 00:14:38.580 { 00:14:38.580 "dma_device_id": "system", 00:14:38.580 "dma_device_type": 1 00:14:38.580 }, 00:14:38.580 { 00:14:38.580 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:38.580 "dma_device_type": 2 00:14:38.580 } 00:14:38.580 ], 00:14:38.580 "driver_specific": {} 00:14:38.580 } 00:14:38.580 ] 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:38.580 21:13:49 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:38.839 21:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:38.839 "name": "Existed_Raid", 00:14:38.839 "uuid": "f5bf0eb2-4225-11ef-aa83-81fbc7dfef58", 00:14:38.839 "strip_size_kb": 64, 00:14:38.839 "state": "configuring", 00:14:38.839 "raid_level": "concat", 00:14:38.839 "superblock": true, 00:14:38.839 "num_base_bdevs": 4, 00:14:38.839 "num_base_bdevs_discovered": 3, 00:14:38.839 "num_base_bdevs_operational": 4, 00:14:38.839 "base_bdevs_list": [ 00:14:38.839 { 00:14:38.839 "name": "BaseBdev1", 00:14:38.839 "uuid": "f4d7b1cd-4225-11ef-aa83-81fbc7dfef58", 00:14:38.839 "is_configured": true, 00:14:38.839 "data_offset": 2048, 00:14:38.839 "data_size": 63488 00:14:38.839 }, 00:14:38.839 { 00:14:38.839 "name": "BaseBdev2", 00:14:38.839 "uuid": "f6326d0a-4225-11ef-aa83-81fbc7dfef58", 00:14:38.839 "is_configured": true, 00:14:38.839 "data_offset": 2048, 00:14:38.839 "data_size": 63488 00:14:38.839 }, 00:14:38.839 { 00:14:38.839 "name": "BaseBdev3", 00:14:38.839 "uuid": "f6e98958-4225-11ef-aa83-81fbc7dfef58", 00:14:38.839 "is_configured": true, 00:14:38.839 "data_offset": 2048, 00:14:38.839 "data_size": 63488 00:14:38.839 }, 00:14:38.839 { 00:14:38.839 "name": "BaseBdev4", 00:14:38.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:38.839 "is_configured": false, 00:14:38.839 "data_offset": 0, 00:14:38.839 "data_size": 0 00:14:38.839 } 00:14:38.839 ] 00:14:38.839 }' 00:14:38.839 21:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:38.839 21:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:39.097 21:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:39.355 [2024-07-14 21:13:50.721840] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:39.355 [2024-07-14 21:13:50.721905] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x37f24e234a00 00:14:39.355 [2024-07-14 21:13:50.721912] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:39.356 [2024-07-14 
21:13:50.721931] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x37f24e297e20 00:14:39.356 [2024-07-14 21:13:50.721989] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x37f24e234a00 00:14:39.356 [2024-07-14 21:13:50.721994] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x37f24e234a00 00:14:39.356 [2024-07-14 21:13:50.722014] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:39.356 BaseBdev4 00:14:39.356 21:13:50 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:14:39.356 21:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:39.356 21:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:39.356 21:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:39.356 21:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:39.356 21:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:39.356 21:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:39.613 21:13:50 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:39.871 [ 00:14:39.871 { 00:14:39.871 "name": "BaseBdev4", 00:14:39.871 "aliases": [ 00:14:39.871 "f79c5ccb-4225-11ef-aa83-81fbc7dfef58" 00:14:39.871 ], 00:14:39.871 "product_name": "Malloc disk", 00:14:39.871 "block_size": 512, 00:14:39.871 "num_blocks": 65536, 00:14:39.871 "uuid": "f79c5ccb-4225-11ef-aa83-81fbc7dfef58", 00:14:39.871 "assigned_rate_limits": { 00:14:39.871 "rw_ios_per_sec": 0, 00:14:39.871 "rw_mbytes_per_sec": 0, 00:14:39.871 "r_mbytes_per_sec": 0, 00:14:39.871 "w_mbytes_per_sec": 0 00:14:39.871 }, 00:14:39.871 "claimed": true, 00:14:39.871 "claim_type": "exclusive_write", 00:14:39.871 "zoned": false, 00:14:39.871 "supported_io_types": { 00:14:39.871 "read": true, 00:14:39.871 "write": true, 00:14:39.871 "unmap": true, 00:14:39.871 "flush": true, 00:14:39.871 "reset": true, 00:14:39.871 "nvme_admin": false, 00:14:39.871 "nvme_io": false, 00:14:39.871 "nvme_io_md": false, 00:14:39.871 "write_zeroes": true, 00:14:39.871 "zcopy": true, 00:14:39.871 "get_zone_info": false, 00:14:39.871 "zone_management": false, 00:14:39.871 "zone_append": false, 00:14:39.871 "compare": false, 00:14:39.871 "compare_and_write": false, 00:14:39.871 "abort": true, 00:14:39.871 "seek_hole": false, 00:14:39.871 "seek_data": false, 00:14:39.871 "copy": true, 00:14:39.871 "nvme_iov_md": false 00:14:39.871 }, 00:14:39.871 "memory_domains": [ 00:14:39.871 { 00:14:39.871 "dma_device_id": "system", 00:14:39.871 "dma_device_type": 1 00:14:39.871 }, 00:14:39.871 { 00:14:39.871 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:39.871 "dma_device_type": 2 00:14:39.871 } 00:14:39.871 ], 00:14:39.871 "driver_specific": {} 00:14:39.871 } 00:14:39.871 ] 00:14:39.871 21:13:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:39.871 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:14:39.871 21:13:51 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:14:39.871 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:39.871 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:39.871 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:39.871 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:39.872 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:39.872 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:39.872 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:39.872 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:39.872 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:39.872 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:39.872 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:39.872 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:40.130 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:40.130 "name": "Existed_Raid", 00:14:40.130 "uuid": "f5bf0eb2-4225-11ef-aa83-81fbc7dfef58", 00:14:40.130 "strip_size_kb": 64, 00:14:40.130 "state": "online", 00:14:40.130 "raid_level": "concat", 00:14:40.130 "superblock": true, 00:14:40.130 "num_base_bdevs": 4, 00:14:40.130 "num_base_bdevs_discovered": 4, 00:14:40.130 "num_base_bdevs_operational": 4, 00:14:40.130 "base_bdevs_list": [ 00:14:40.130 { 00:14:40.130 "name": "BaseBdev1", 00:14:40.130 "uuid": "f4d7b1cd-4225-11ef-aa83-81fbc7dfef58", 00:14:40.130 "is_configured": true, 00:14:40.130 "data_offset": 2048, 00:14:40.130 "data_size": 63488 00:14:40.130 }, 00:14:40.130 { 00:14:40.130 "name": "BaseBdev2", 00:14:40.130 "uuid": "f6326d0a-4225-11ef-aa83-81fbc7dfef58", 00:14:40.130 "is_configured": true, 00:14:40.130 "data_offset": 2048, 00:14:40.130 "data_size": 63488 00:14:40.130 }, 00:14:40.130 { 00:14:40.130 "name": "BaseBdev3", 00:14:40.130 "uuid": "f6e98958-4225-11ef-aa83-81fbc7dfef58", 00:14:40.130 "is_configured": true, 00:14:40.130 "data_offset": 2048, 00:14:40.130 "data_size": 63488 00:14:40.130 }, 00:14:40.130 { 00:14:40.130 "name": "BaseBdev4", 00:14:40.130 "uuid": "f79c5ccb-4225-11ef-aa83-81fbc7dfef58", 00:14:40.130 "is_configured": true, 00:14:40.130 "data_offset": 2048, 00:14:40.130 "data_size": 63488 00:14:40.130 } 00:14:40.130 ] 00:14:40.130 }' 00:14:40.130 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:40.130 21:13:51 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:40.388 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:14:40.388 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:40.388 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # 
local raid_bdev_info 00:14:40.388 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:40.388 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:40.388 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:40.388 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:40.388 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:40.388 [2024-07-14 21:13:51.913789] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:40.388 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:40.388 "name": "Existed_Raid", 00:14:40.388 "aliases": [ 00:14:40.388 "f5bf0eb2-4225-11ef-aa83-81fbc7dfef58" 00:14:40.388 ], 00:14:40.388 "product_name": "Raid Volume", 00:14:40.388 "block_size": 512, 00:14:40.388 "num_blocks": 253952, 00:14:40.388 "uuid": "f5bf0eb2-4225-11ef-aa83-81fbc7dfef58", 00:14:40.388 "assigned_rate_limits": { 00:14:40.388 "rw_ios_per_sec": 0, 00:14:40.388 "rw_mbytes_per_sec": 0, 00:14:40.388 "r_mbytes_per_sec": 0, 00:14:40.388 "w_mbytes_per_sec": 0 00:14:40.388 }, 00:14:40.388 "claimed": false, 00:14:40.388 "zoned": false, 00:14:40.388 "supported_io_types": { 00:14:40.388 "read": true, 00:14:40.388 "write": true, 00:14:40.388 "unmap": true, 00:14:40.388 "flush": true, 00:14:40.388 "reset": true, 00:14:40.388 "nvme_admin": false, 00:14:40.388 "nvme_io": false, 00:14:40.388 "nvme_io_md": false, 00:14:40.388 "write_zeroes": true, 00:14:40.388 "zcopy": false, 00:14:40.388 "get_zone_info": false, 00:14:40.388 "zone_management": false, 00:14:40.388 "zone_append": false, 00:14:40.388 "compare": false, 00:14:40.388 "compare_and_write": false, 00:14:40.388 "abort": false, 00:14:40.388 "seek_hole": false, 00:14:40.388 "seek_data": false, 00:14:40.388 "copy": false, 00:14:40.388 "nvme_iov_md": false 00:14:40.388 }, 00:14:40.388 "memory_domains": [ 00:14:40.388 { 00:14:40.388 "dma_device_id": "system", 00:14:40.388 "dma_device_type": 1 00:14:40.388 }, 00:14:40.388 { 00:14:40.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.388 "dma_device_type": 2 00:14:40.388 }, 00:14:40.388 { 00:14:40.388 "dma_device_id": "system", 00:14:40.388 "dma_device_type": 1 00:14:40.388 }, 00:14:40.388 { 00:14:40.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.388 "dma_device_type": 2 00:14:40.388 }, 00:14:40.388 { 00:14:40.388 "dma_device_id": "system", 00:14:40.388 "dma_device_type": 1 00:14:40.388 }, 00:14:40.388 { 00:14:40.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.388 "dma_device_type": 2 00:14:40.388 }, 00:14:40.388 { 00:14:40.388 "dma_device_id": "system", 00:14:40.388 "dma_device_type": 1 00:14:40.388 }, 00:14:40.388 { 00:14:40.388 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.388 "dma_device_type": 2 00:14:40.388 } 00:14:40.388 ], 00:14:40.388 "driver_specific": { 00:14:40.388 "raid": { 00:14:40.388 "uuid": "f5bf0eb2-4225-11ef-aa83-81fbc7dfef58", 00:14:40.388 "strip_size_kb": 64, 00:14:40.388 "state": "online", 00:14:40.388 "raid_level": "concat", 00:14:40.388 "superblock": true, 00:14:40.388 "num_base_bdevs": 4, 00:14:40.388 "num_base_bdevs_discovered": 4, 00:14:40.388 "num_base_bdevs_operational": 4, 00:14:40.388 "base_bdevs_list": [ 00:14:40.388 { 00:14:40.388 "name": "BaseBdev1", 00:14:40.388 "uuid": 
"f4d7b1cd-4225-11ef-aa83-81fbc7dfef58", 00:14:40.388 "is_configured": true, 00:14:40.388 "data_offset": 2048, 00:14:40.388 "data_size": 63488 00:14:40.388 }, 00:14:40.388 { 00:14:40.388 "name": "BaseBdev2", 00:14:40.388 "uuid": "f6326d0a-4225-11ef-aa83-81fbc7dfef58", 00:14:40.388 "is_configured": true, 00:14:40.388 "data_offset": 2048, 00:14:40.388 "data_size": 63488 00:14:40.388 }, 00:14:40.388 { 00:14:40.388 "name": "BaseBdev3", 00:14:40.388 "uuid": "f6e98958-4225-11ef-aa83-81fbc7dfef58", 00:14:40.388 "is_configured": true, 00:14:40.388 "data_offset": 2048, 00:14:40.388 "data_size": 63488 00:14:40.388 }, 00:14:40.388 { 00:14:40.388 "name": "BaseBdev4", 00:14:40.388 "uuid": "f79c5ccb-4225-11ef-aa83-81fbc7dfef58", 00:14:40.388 "is_configured": true, 00:14:40.388 "data_offset": 2048, 00:14:40.388 "data_size": 63488 00:14:40.388 } 00:14:40.388 ] 00:14:40.388 } 00:14:40.388 } 00:14:40.388 }' 00:14:40.388 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:40.645 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:14:40.645 BaseBdev2 00:14:40.645 BaseBdev3 00:14:40.645 BaseBdev4' 00:14:40.645 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:40.645 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:14:40.645 21:13:51 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:40.904 "name": "BaseBdev1", 00:14:40.904 "aliases": [ 00:14:40.904 "f4d7b1cd-4225-11ef-aa83-81fbc7dfef58" 00:14:40.904 ], 00:14:40.904 "product_name": "Malloc disk", 00:14:40.904 "block_size": 512, 00:14:40.904 "num_blocks": 65536, 00:14:40.904 "uuid": "f4d7b1cd-4225-11ef-aa83-81fbc7dfef58", 00:14:40.904 "assigned_rate_limits": { 00:14:40.904 "rw_ios_per_sec": 0, 00:14:40.904 "rw_mbytes_per_sec": 0, 00:14:40.904 "r_mbytes_per_sec": 0, 00:14:40.904 "w_mbytes_per_sec": 0 00:14:40.904 }, 00:14:40.904 "claimed": true, 00:14:40.904 "claim_type": "exclusive_write", 00:14:40.904 "zoned": false, 00:14:40.904 "supported_io_types": { 00:14:40.904 "read": true, 00:14:40.904 "write": true, 00:14:40.904 "unmap": true, 00:14:40.904 "flush": true, 00:14:40.904 "reset": true, 00:14:40.904 "nvme_admin": false, 00:14:40.904 "nvme_io": false, 00:14:40.904 "nvme_io_md": false, 00:14:40.904 "write_zeroes": true, 00:14:40.904 "zcopy": true, 00:14:40.904 "get_zone_info": false, 00:14:40.904 "zone_management": false, 00:14:40.904 "zone_append": false, 00:14:40.904 "compare": false, 00:14:40.904 "compare_and_write": false, 00:14:40.904 "abort": true, 00:14:40.904 "seek_hole": false, 00:14:40.904 "seek_data": false, 00:14:40.904 "copy": true, 00:14:40.904 "nvme_iov_md": false 00:14:40.904 }, 00:14:40.904 "memory_domains": [ 00:14:40.904 { 00:14:40.904 "dma_device_id": "system", 00:14:40.904 "dma_device_type": 1 00:14:40.904 }, 00:14:40.904 { 00:14:40.904 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:40.904 "dma_device_type": 2 00:14:40.904 } 00:14:40.904 ], 00:14:40.904 "driver_specific": {} 00:14:40.904 }' 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:40.904 21:13:52 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:40.904 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:41.163 "name": "BaseBdev2", 00:14:41.163 "aliases": [ 00:14:41.163 "f6326d0a-4225-11ef-aa83-81fbc7dfef58" 00:14:41.163 ], 00:14:41.163 "product_name": "Malloc disk", 00:14:41.163 "block_size": 512, 00:14:41.163 "num_blocks": 65536, 00:14:41.163 "uuid": "f6326d0a-4225-11ef-aa83-81fbc7dfef58", 00:14:41.163 "assigned_rate_limits": { 00:14:41.163 "rw_ios_per_sec": 0, 00:14:41.163 "rw_mbytes_per_sec": 0, 00:14:41.163 "r_mbytes_per_sec": 0, 00:14:41.163 "w_mbytes_per_sec": 0 00:14:41.163 }, 00:14:41.163 "claimed": true, 00:14:41.163 "claim_type": "exclusive_write", 00:14:41.163 "zoned": false, 00:14:41.163 "supported_io_types": { 00:14:41.163 "read": true, 00:14:41.163 "write": true, 00:14:41.163 "unmap": true, 00:14:41.163 "flush": true, 00:14:41.163 "reset": true, 00:14:41.163 "nvme_admin": false, 00:14:41.163 "nvme_io": false, 00:14:41.163 "nvme_io_md": false, 00:14:41.163 "write_zeroes": true, 00:14:41.163 "zcopy": true, 00:14:41.163 "get_zone_info": false, 00:14:41.163 "zone_management": false, 00:14:41.163 "zone_append": false, 00:14:41.163 "compare": false, 00:14:41.163 "compare_and_write": false, 00:14:41.163 "abort": true, 00:14:41.163 "seek_hole": false, 00:14:41.163 "seek_data": false, 00:14:41.163 "copy": true, 00:14:41.163 "nvme_iov_md": false 00:14:41.163 }, 00:14:41.163 "memory_domains": [ 00:14:41.163 { 00:14:41.163 "dma_device_id": "system", 00:14:41.163 "dma_device_type": 1 00:14:41.163 }, 00:14:41.163 { 00:14:41.163 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.163 "dma_device_type": 2 00:14:41.163 } 00:14:41.163 ], 00:14:41.163 "driver_specific": {} 00:14:41.163 }' 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:41.163 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:41.422 "name": "BaseBdev3", 00:14:41.422 "aliases": [ 00:14:41.422 "f6e98958-4225-11ef-aa83-81fbc7dfef58" 00:14:41.422 ], 00:14:41.422 "product_name": "Malloc disk", 00:14:41.422 "block_size": 512, 00:14:41.422 "num_blocks": 65536, 00:14:41.422 "uuid": "f6e98958-4225-11ef-aa83-81fbc7dfef58", 00:14:41.422 "assigned_rate_limits": { 00:14:41.422 "rw_ios_per_sec": 0, 00:14:41.422 "rw_mbytes_per_sec": 0, 00:14:41.422 "r_mbytes_per_sec": 0, 00:14:41.422 "w_mbytes_per_sec": 0 00:14:41.422 }, 00:14:41.422 "claimed": true, 00:14:41.422 "claim_type": "exclusive_write", 00:14:41.422 "zoned": false, 00:14:41.422 "supported_io_types": { 00:14:41.422 "read": true, 00:14:41.422 "write": true, 00:14:41.422 "unmap": true, 00:14:41.422 "flush": true, 00:14:41.422 "reset": true, 00:14:41.422 "nvme_admin": false, 00:14:41.422 "nvme_io": false, 00:14:41.422 "nvme_io_md": false, 00:14:41.422 "write_zeroes": true, 00:14:41.422 "zcopy": true, 00:14:41.422 "get_zone_info": false, 00:14:41.422 "zone_management": false, 00:14:41.422 "zone_append": false, 00:14:41.422 "compare": false, 00:14:41.422 "compare_and_write": false, 00:14:41.422 "abort": true, 00:14:41.422 "seek_hole": false, 00:14:41.422 "seek_data": false, 00:14:41.422 "copy": true, 00:14:41.422 "nvme_iov_md": false 00:14:41.422 }, 00:14:41.422 "memory_domains": [ 00:14:41.422 { 00:14:41.422 "dma_device_id": "system", 00:14:41.422 "dma_device_type": 1 00:14:41.422 }, 00:14:41.422 { 00:14:41.422 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.422 "dma_device_type": 2 00:14:41.422 } 00:14:41.422 ], 00:14:41.422 "driver_specific": {} 00:14:41.422 }' 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 
00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:41.422 21:13:52 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:41.681 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:41.681 "name": "BaseBdev4", 00:14:41.681 "aliases": [ 00:14:41.681 "f79c5ccb-4225-11ef-aa83-81fbc7dfef58" 00:14:41.681 ], 00:14:41.681 "product_name": "Malloc disk", 00:14:41.681 "block_size": 512, 00:14:41.681 "num_blocks": 65536, 00:14:41.681 "uuid": "f79c5ccb-4225-11ef-aa83-81fbc7dfef58", 00:14:41.681 "assigned_rate_limits": { 00:14:41.681 "rw_ios_per_sec": 0, 00:14:41.681 "rw_mbytes_per_sec": 0, 00:14:41.681 "r_mbytes_per_sec": 0, 00:14:41.681 "w_mbytes_per_sec": 0 00:14:41.681 }, 00:14:41.681 "claimed": true, 00:14:41.681 "claim_type": "exclusive_write", 00:14:41.681 "zoned": false, 00:14:41.681 "supported_io_types": { 00:14:41.681 "read": true, 00:14:41.681 "write": true, 00:14:41.681 "unmap": true, 00:14:41.681 "flush": true, 00:14:41.681 "reset": true, 00:14:41.681 "nvme_admin": false, 00:14:41.681 "nvme_io": false, 00:14:41.681 "nvme_io_md": false, 00:14:41.681 "write_zeroes": true, 00:14:41.681 "zcopy": true, 00:14:41.681 "get_zone_info": false, 00:14:41.681 "zone_management": false, 00:14:41.681 "zone_append": false, 00:14:41.681 "compare": false, 00:14:41.681 "compare_and_write": false, 00:14:41.681 "abort": true, 00:14:41.681 "seek_hole": false, 00:14:41.681 "seek_data": false, 00:14:41.681 "copy": true, 00:14:41.681 "nvme_iov_md": false 00:14:41.681 }, 00:14:41.681 "memory_domains": [ 00:14:41.681 { 00:14:41.681 "dma_device_id": "system", 00:14:41.681 "dma_device_type": 1 00:14:41.681 }, 00:14:41.681 { 00:14:41.681 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:41.681 "dma_device_type": 2 00:14:41.681 } 00:14:41.681 ], 00:14:41.681 "driver_specific": {} 00:14:41.681 }' 00:14:41.681 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.681 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:41.681 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:41.681 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:41.681 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:41.681 21:13:53 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:41.681 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.681 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:41.681 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:41.681 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.681 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:41.681 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:41.681 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:41.939 [2024-07-14 21:13:53.417798] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:41.939 [2024-07-14 21:13:53.417817] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:41.939 [2024-07-14 21:13:53.417836] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy concat 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@215 -- # return 1 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@277 -- # expected_state=offline 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid offline concat 64 3 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=offline 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:41.939 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.197 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:42.197 "name": "Existed_Raid", 00:14:42.197 "uuid": "f5bf0eb2-4225-11ef-aa83-81fbc7dfef58", 00:14:42.197 "strip_size_kb": 64, 
00:14:42.197 "state": "offline", 00:14:42.197 "raid_level": "concat", 00:14:42.197 "superblock": true, 00:14:42.197 "num_base_bdevs": 4, 00:14:42.197 "num_base_bdevs_discovered": 3, 00:14:42.197 "num_base_bdevs_operational": 3, 00:14:42.197 "base_bdevs_list": [ 00:14:42.197 { 00:14:42.197 "name": null, 00:14:42.197 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:42.197 "is_configured": false, 00:14:42.197 "data_offset": 2048, 00:14:42.197 "data_size": 63488 00:14:42.197 }, 00:14:42.197 { 00:14:42.197 "name": "BaseBdev2", 00:14:42.197 "uuid": "f6326d0a-4225-11ef-aa83-81fbc7dfef58", 00:14:42.197 "is_configured": true, 00:14:42.197 "data_offset": 2048, 00:14:42.197 "data_size": 63488 00:14:42.197 }, 00:14:42.197 { 00:14:42.197 "name": "BaseBdev3", 00:14:42.197 "uuid": "f6e98958-4225-11ef-aa83-81fbc7dfef58", 00:14:42.197 "is_configured": true, 00:14:42.197 "data_offset": 2048, 00:14:42.197 "data_size": 63488 00:14:42.197 }, 00:14:42.197 { 00:14:42.197 "name": "BaseBdev4", 00:14:42.197 "uuid": "f79c5ccb-4225-11ef-aa83-81fbc7dfef58", 00:14:42.197 "is_configured": true, 00:14:42.197 "data_offset": 2048, 00:14:42.197 "data_size": 63488 00:14:42.197 } 00:14:42.197 ] 00:14:42.197 }' 00:14:42.197 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:42.197 21:13:53 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:42.456 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:14:42.456 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:42.456 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:42.456 21:13:53 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:42.714 21:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:42.714 21:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:42.714 21:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:14:42.972 [2024-07-14 21:13:54.458093] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:42.973 21:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:42.973 21:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:42.973 21:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:42.973 21:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.232 21:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:43.232 21:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:43.232 21:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:14:43.490 [2024-07-14 21:13:54.874826] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:43.490 21:13:54 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:43.490 21:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:43.490 21:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:14:43.490 21:13:54 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:43.748 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:14:43.748 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:14:43.749 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:14:44.007 [2024-07-14 21:13:55.323076] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:14:44.007 [2024-07-14 21:13:55.323101] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x37f24e234a00 name Existed_Raid, state offline 00:14:44.007 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:14:44.007 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:14:44.007 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:44.007 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:14:44.266 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:14:44.266 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:14:44.266 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:14:44.266 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:14:44.266 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:44.266 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:14:44.524 BaseBdev2 00:14:44.524 21:13:55 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:14:44.524 21:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:14:44.524 21:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:44.524 21:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:44.524 21:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:44.524 21:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:44.524 21:13:55 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:44.782 21:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:14:44.782 [ 
00:14:44.782 { 00:14:44.782 "name": "BaseBdev2", 00:14:44.782 "aliases": [ 00:14:44.782 "faaf5541-4225-11ef-aa83-81fbc7dfef58" 00:14:44.782 ], 00:14:44.782 "product_name": "Malloc disk", 00:14:44.782 "block_size": 512, 00:14:44.782 "num_blocks": 65536, 00:14:44.782 "uuid": "faaf5541-4225-11ef-aa83-81fbc7dfef58", 00:14:44.782 "assigned_rate_limits": { 00:14:44.782 "rw_ios_per_sec": 0, 00:14:44.782 "rw_mbytes_per_sec": 0, 00:14:44.782 "r_mbytes_per_sec": 0, 00:14:44.782 "w_mbytes_per_sec": 0 00:14:44.782 }, 00:14:44.782 "claimed": false, 00:14:44.782 "zoned": false, 00:14:44.782 "supported_io_types": { 00:14:44.782 "read": true, 00:14:44.782 "write": true, 00:14:44.782 "unmap": true, 00:14:44.782 "flush": true, 00:14:44.782 "reset": true, 00:14:44.783 "nvme_admin": false, 00:14:44.783 "nvme_io": false, 00:14:44.783 "nvme_io_md": false, 00:14:44.783 "write_zeroes": true, 00:14:44.783 "zcopy": true, 00:14:44.783 "get_zone_info": false, 00:14:44.783 "zone_management": false, 00:14:44.783 "zone_append": false, 00:14:44.783 "compare": false, 00:14:44.783 "compare_and_write": false, 00:14:44.783 "abort": true, 00:14:44.783 "seek_hole": false, 00:14:44.783 "seek_data": false, 00:14:44.783 "copy": true, 00:14:44.783 "nvme_iov_md": false 00:14:44.783 }, 00:14:44.783 "memory_domains": [ 00:14:44.783 { 00:14:44.783 "dma_device_id": "system", 00:14:44.783 "dma_device_type": 1 00:14:44.783 }, 00:14:44.783 { 00:14:44.783 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:44.783 "dma_device_type": 2 00:14:44.783 } 00:14:44.783 ], 00:14:44.783 "driver_specific": {} 00:14:44.783 } 00:14:44.783 ] 00:14:44.783 21:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:44.783 21:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:44.783 21:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:44.783 21:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:14:45.049 BaseBdev3 00:14:45.049 21:13:56 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:14:45.049 21:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:14:45.049 21:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:45.049 21:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:45.049 21:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:45.049 21:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:45.049 21:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:45.324 21:13:56 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:14:45.582 [ 00:14:45.582 { 00:14:45.582 "name": "BaseBdev3", 00:14:45.582 "aliases": [ 00:14:45.582 "fb0d5364-4225-11ef-aa83-81fbc7dfef58" 00:14:45.582 ], 00:14:45.582 "product_name": "Malloc disk", 00:14:45.582 "block_size": 512, 00:14:45.582 "num_blocks": 65536, 00:14:45.582 "uuid": 
"fb0d5364-4225-11ef-aa83-81fbc7dfef58", 00:14:45.582 "assigned_rate_limits": { 00:14:45.582 "rw_ios_per_sec": 0, 00:14:45.582 "rw_mbytes_per_sec": 0, 00:14:45.582 "r_mbytes_per_sec": 0, 00:14:45.582 "w_mbytes_per_sec": 0 00:14:45.582 }, 00:14:45.582 "claimed": false, 00:14:45.582 "zoned": false, 00:14:45.582 "supported_io_types": { 00:14:45.582 "read": true, 00:14:45.582 "write": true, 00:14:45.582 "unmap": true, 00:14:45.582 "flush": true, 00:14:45.582 "reset": true, 00:14:45.582 "nvme_admin": false, 00:14:45.582 "nvme_io": false, 00:14:45.582 "nvme_io_md": false, 00:14:45.582 "write_zeroes": true, 00:14:45.582 "zcopy": true, 00:14:45.582 "get_zone_info": false, 00:14:45.582 "zone_management": false, 00:14:45.582 "zone_append": false, 00:14:45.582 "compare": false, 00:14:45.582 "compare_and_write": false, 00:14:45.582 "abort": true, 00:14:45.582 "seek_hole": false, 00:14:45.582 "seek_data": false, 00:14:45.582 "copy": true, 00:14:45.582 "nvme_iov_md": false 00:14:45.582 }, 00:14:45.582 "memory_domains": [ 00:14:45.582 { 00:14:45.582 "dma_device_id": "system", 00:14:45.582 "dma_device_type": 1 00:14:45.582 }, 00:14:45.582 { 00:14:45.582 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:45.582 "dma_device_type": 2 00:14:45.582 } 00:14:45.582 ], 00:14:45.582 "driver_specific": {} 00:14:45.582 } 00:14:45.582 ] 00:14:45.582 21:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:45.582 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:45.582 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:45.582 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:14:45.841 BaseBdev4 00:14:45.841 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:14:45.841 21:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:14:45.841 21:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:45.841 21:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:45.841 21:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:45.841 21:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:45.841 21:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:46.099 21:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:14:46.357 [ 00:14:46.357 { 00:14:46.357 "name": "BaseBdev4", 00:14:46.357 "aliases": [ 00:14:46.357 "fb87659b-4225-11ef-aa83-81fbc7dfef58" 00:14:46.357 ], 00:14:46.357 "product_name": "Malloc disk", 00:14:46.357 "block_size": 512, 00:14:46.357 "num_blocks": 65536, 00:14:46.357 "uuid": "fb87659b-4225-11ef-aa83-81fbc7dfef58", 00:14:46.357 "assigned_rate_limits": { 00:14:46.357 "rw_ios_per_sec": 0, 00:14:46.357 "rw_mbytes_per_sec": 0, 00:14:46.357 "r_mbytes_per_sec": 0, 00:14:46.357 "w_mbytes_per_sec": 0 00:14:46.357 }, 00:14:46.357 "claimed": false, 00:14:46.357 "zoned": false, 00:14:46.357 
"supported_io_types": { 00:14:46.357 "read": true, 00:14:46.357 "write": true, 00:14:46.357 "unmap": true, 00:14:46.357 "flush": true, 00:14:46.357 "reset": true, 00:14:46.357 "nvme_admin": false, 00:14:46.357 "nvme_io": false, 00:14:46.357 "nvme_io_md": false, 00:14:46.357 "write_zeroes": true, 00:14:46.357 "zcopy": true, 00:14:46.357 "get_zone_info": false, 00:14:46.357 "zone_management": false, 00:14:46.357 "zone_append": false, 00:14:46.357 "compare": false, 00:14:46.357 "compare_and_write": false, 00:14:46.357 "abort": true, 00:14:46.357 "seek_hole": false, 00:14:46.357 "seek_data": false, 00:14:46.357 "copy": true, 00:14:46.357 "nvme_iov_md": false 00:14:46.357 }, 00:14:46.357 "memory_domains": [ 00:14:46.357 { 00:14:46.357 "dma_device_id": "system", 00:14:46.357 "dma_device_type": 1 00:14:46.357 }, 00:14:46.357 { 00:14:46.358 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:46.358 "dma_device_type": 2 00:14:46.358 } 00:14:46.358 ], 00:14:46.358 "driver_specific": {} 00:14:46.358 } 00:14:46.358 ] 00:14:46.358 21:13:57 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:46.358 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:14:46.358 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:14:46.358 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -s -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:14:46.616 [2024-07-14 21:13:57.971277] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:14:46.616 [2024-07-14 21:13:57.971324] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:14:46.616 [2024-07-14 21:13:57.971331] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:46.616 [2024-07-14 21:13:57.971633] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:46.616 [2024-07-14 21:13:57.971645] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:14:46.616 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:46.616 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:46.616 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:46.616 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:46.616 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:46.616 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:46.616 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:46.616 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:46.616 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:46.616 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:46.616 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:46.616 21:13:57 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:46.874 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:46.874 "name": "Existed_Raid", 00:14:46.874 "uuid": "fbee8cdc-4225-11ef-aa83-81fbc7dfef58", 00:14:46.874 "strip_size_kb": 64, 00:14:46.875 "state": "configuring", 00:14:46.875 "raid_level": "concat", 00:14:46.875 "superblock": true, 00:14:46.875 "num_base_bdevs": 4, 00:14:46.875 "num_base_bdevs_discovered": 3, 00:14:46.875 "num_base_bdevs_operational": 4, 00:14:46.875 "base_bdevs_list": [ 00:14:46.875 { 00:14:46.875 "name": "BaseBdev1", 00:14:46.875 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:46.875 "is_configured": false, 00:14:46.875 "data_offset": 0, 00:14:46.875 "data_size": 0 00:14:46.875 }, 00:14:46.875 { 00:14:46.875 "name": "BaseBdev2", 00:14:46.875 "uuid": "faaf5541-4225-11ef-aa83-81fbc7dfef58", 00:14:46.875 "is_configured": true, 00:14:46.875 "data_offset": 2048, 00:14:46.875 "data_size": 63488 00:14:46.875 }, 00:14:46.875 { 00:14:46.875 "name": "BaseBdev3", 00:14:46.875 "uuid": "fb0d5364-4225-11ef-aa83-81fbc7dfef58", 00:14:46.875 "is_configured": true, 00:14:46.875 "data_offset": 2048, 00:14:46.875 "data_size": 63488 00:14:46.875 }, 00:14:46.875 { 00:14:46.875 "name": "BaseBdev4", 00:14:46.875 "uuid": "fb87659b-4225-11ef-aa83-81fbc7dfef58", 00:14:46.875 "is_configured": true, 00:14:46.875 "data_offset": 2048, 00:14:46.875 "data_size": 63488 00:14:46.875 } 00:14:46.875 ] 00:14:46.875 }' 00:14:46.875 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:46.875 21:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.133 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:14:47.391 [2024-07-14 21:13:58.751272] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:14:47.391 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:47.391 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:47.391 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:47.391 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:47.391 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:47.391 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:47.391 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:47.391 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:47.391 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:47.391 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:47.391 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:14:47.391 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:47.650 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:47.650 "name": "Existed_Raid", 00:14:47.650 "uuid": "fbee8cdc-4225-11ef-aa83-81fbc7dfef58", 00:14:47.650 "strip_size_kb": 64, 00:14:47.650 "state": "configuring", 00:14:47.650 "raid_level": "concat", 00:14:47.650 "superblock": true, 00:14:47.650 "num_base_bdevs": 4, 00:14:47.650 "num_base_bdevs_discovered": 2, 00:14:47.650 "num_base_bdevs_operational": 4, 00:14:47.650 "base_bdevs_list": [ 00:14:47.650 { 00:14:47.650 "name": "BaseBdev1", 00:14:47.650 "uuid": "00000000-0000-0000-0000-000000000000", 00:14:47.650 "is_configured": false, 00:14:47.650 "data_offset": 0, 00:14:47.650 "data_size": 0 00:14:47.650 }, 00:14:47.650 { 00:14:47.650 "name": null, 00:14:47.650 "uuid": "faaf5541-4225-11ef-aa83-81fbc7dfef58", 00:14:47.650 "is_configured": false, 00:14:47.650 "data_offset": 2048, 00:14:47.650 "data_size": 63488 00:14:47.650 }, 00:14:47.650 { 00:14:47.650 "name": "BaseBdev3", 00:14:47.650 "uuid": "fb0d5364-4225-11ef-aa83-81fbc7dfef58", 00:14:47.650 "is_configured": true, 00:14:47.650 "data_offset": 2048, 00:14:47.650 "data_size": 63488 00:14:47.650 }, 00:14:47.650 { 00:14:47.650 "name": "BaseBdev4", 00:14:47.650 "uuid": "fb87659b-4225-11ef-aa83-81fbc7dfef58", 00:14:47.650 "is_configured": true, 00:14:47.650 "data_offset": 2048, 00:14:47.650 "data_size": 63488 00:14:47.650 } 00:14:47.650 ] 00:14:47.650 }' 00:14:47.650 21:13:58 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:47.650 21:13:58 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:47.908 21:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:47.908 21:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:48.166 21:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:14:48.166 21:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:14:48.166 [2024-07-14 21:13:59.711397] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:14:48.425 BaseBdev1 00:14:48.425 21:13:59 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:14:48.425 21:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:14:48.425 21:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:48.425 21:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:48.425 21:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:48.425 21:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:48.425 21:13:59 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:48.425 21:13:59 bdev_raid.raid_state_function_test_sb -- 
common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:14:48.684 [ 00:14:48.684 { 00:14:48.684 "name": "BaseBdev1", 00:14:48.684 "aliases": [ 00:14:48.684 "fcf80ee1-4225-11ef-aa83-81fbc7dfef58" 00:14:48.684 ], 00:14:48.684 "product_name": "Malloc disk", 00:14:48.684 "block_size": 512, 00:14:48.684 "num_blocks": 65536, 00:14:48.684 "uuid": "fcf80ee1-4225-11ef-aa83-81fbc7dfef58", 00:14:48.684 "assigned_rate_limits": { 00:14:48.684 "rw_ios_per_sec": 0, 00:14:48.684 "rw_mbytes_per_sec": 0, 00:14:48.684 "r_mbytes_per_sec": 0, 00:14:48.684 "w_mbytes_per_sec": 0 00:14:48.684 }, 00:14:48.684 "claimed": true, 00:14:48.684 "claim_type": "exclusive_write", 00:14:48.684 "zoned": false, 00:14:48.684 "supported_io_types": { 00:14:48.684 "read": true, 00:14:48.684 "write": true, 00:14:48.684 "unmap": true, 00:14:48.684 "flush": true, 00:14:48.684 "reset": true, 00:14:48.684 "nvme_admin": false, 00:14:48.684 "nvme_io": false, 00:14:48.684 "nvme_io_md": false, 00:14:48.684 "write_zeroes": true, 00:14:48.684 "zcopy": true, 00:14:48.684 "get_zone_info": false, 00:14:48.684 "zone_management": false, 00:14:48.684 "zone_append": false, 00:14:48.684 "compare": false, 00:14:48.684 "compare_and_write": false, 00:14:48.684 "abort": true, 00:14:48.684 "seek_hole": false, 00:14:48.684 "seek_data": false, 00:14:48.684 "copy": true, 00:14:48.684 "nvme_iov_md": false 00:14:48.684 }, 00:14:48.684 "memory_domains": [ 00:14:48.684 { 00:14:48.684 "dma_device_id": "system", 00:14:48.684 "dma_device_type": 1 00:14:48.684 }, 00:14:48.684 { 00:14:48.684 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:48.684 "dma_device_type": 2 00:14:48.684 } 00:14:48.684 ], 00:14:48.684 "driver_specific": {} 00:14:48.684 } 00:14:48.684 ] 00:14:48.684 21:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:48.684 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:48.684 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:48.684 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:48.684 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:48.684 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:48.684 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:48.684 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:48.684 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:48.684 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:48.684 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:48.684 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:48.684 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:48.943 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:48.943 "name": 
"Existed_Raid", 00:14:48.943 "uuid": "fbee8cdc-4225-11ef-aa83-81fbc7dfef58", 00:14:48.943 "strip_size_kb": 64, 00:14:48.943 "state": "configuring", 00:14:48.943 "raid_level": "concat", 00:14:48.943 "superblock": true, 00:14:48.943 "num_base_bdevs": 4, 00:14:48.943 "num_base_bdevs_discovered": 3, 00:14:48.943 "num_base_bdevs_operational": 4, 00:14:48.943 "base_bdevs_list": [ 00:14:48.943 { 00:14:48.943 "name": "BaseBdev1", 00:14:48.943 "uuid": "fcf80ee1-4225-11ef-aa83-81fbc7dfef58", 00:14:48.943 "is_configured": true, 00:14:48.943 "data_offset": 2048, 00:14:48.943 "data_size": 63488 00:14:48.943 }, 00:14:48.943 { 00:14:48.943 "name": null, 00:14:48.943 "uuid": "faaf5541-4225-11ef-aa83-81fbc7dfef58", 00:14:48.943 "is_configured": false, 00:14:48.943 "data_offset": 2048, 00:14:48.943 "data_size": 63488 00:14:48.943 }, 00:14:48.943 { 00:14:48.943 "name": "BaseBdev3", 00:14:48.943 "uuid": "fb0d5364-4225-11ef-aa83-81fbc7dfef58", 00:14:48.943 "is_configured": true, 00:14:48.943 "data_offset": 2048, 00:14:48.943 "data_size": 63488 00:14:48.943 }, 00:14:48.943 { 00:14:48.943 "name": "BaseBdev4", 00:14:48.943 "uuid": "fb87659b-4225-11ef-aa83-81fbc7dfef58", 00:14:48.943 "is_configured": true, 00:14:48.943 "data_offset": 2048, 00:14:48.943 "data_size": 63488 00:14:48.943 } 00:14:48.943 ] 00:14:48.943 }' 00:14:48.943 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:48.943 21:14:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:49.202 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.202 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:49.460 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:14:49.460 21:14:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:14:49.719 [2024-07-14 21:14:01.151336] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:14:49.719 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:49.719 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:49.719 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:49.719 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:49.719 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:49.719 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:49.719 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:49.719 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:49.719 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:49.719 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:49.719 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:49.719 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:49.978 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:49.978 "name": "Existed_Raid", 00:14:49.978 "uuid": "fbee8cdc-4225-11ef-aa83-81fbc7dfef58", 00:14:49.978 "strip_size_kb": 64, 00:14:49.978 "state": "configuring", 00:14:49.978 "raid_level": "concat", 00:14:49.978 "superblock": true, 00:14:49.978 "num_base_bdevs": 4, 00:14:49.978 "num_base_bdevs_discovered": 2, 00:14:49.978 "num_base_bdevs_operational": 4, 00:14:49.978 "base_bdevs_list": [ 00:14:49.978 { 00:14:49.978 "name": "BaseBdev1", 00:14:49.978 "uuid": "fcf80ee1-4225-11ef-aa83-81fbc7dfef58", 00:14:49.978 "is_configured": true, 00:14:49.978 "data_offset": 2048, 00:14:49.978 "data_size": 63488 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "name": null, 00:14:49.978 "uuid": "faaf5541-4225-11ef-aa83-81fbc7dfef58", 00:14:49.978 "is_configured": false, 00:14:49.978 "data_offset": 2048, 00:14:49.978 "data_size": 63488 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "name": null, 00:14:49.978 "uuid": "fb0d5364-4225-11ef-aa83-81fbc7dfef58", 00:14:49.978 "is_configured": false, 00:14:49.978 "data_offset": 2048, 00:14:49.978 "data_size": 63488 00:14:49.978 }, 00:14:49.978 { 00:14:49.978 "name": "BaseBdev4", 00:14:49.978 "uuid": "fb87659b-4225-11ef-aa83-81fbc7dfef58", 00:14:49.978 "is_configured": true, 00:14:49.978 "data_offset": 2048, 00:14:49.978 "data_size": 63488 00:14:49.978 } 00:14:49.978 ] 00:14:49.978 }' 00:14:49.978 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:49.978 21:14:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:50.236 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.236 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:50.494 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:14:50.494 21:14:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:14:50.753 [2024-07-14 21:14:02.107360] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:14:50.753 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:50.753 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:50.753 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:50.753 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:50.753 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:50.753 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:50.753 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:50.753 21:14:02 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:50.753 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:50.753 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:50.753 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:50.753 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:51.011 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:51.011 "name": "Existed_Raid", 00:14:51.011 "uuid": "fbee8cdc-4225-11ef-aa83-81fbc7dfef58", 00:14:51.011 "strip_size_kb": 64, 00:14:51.011 "state": "configuring", 00:14:51.011 "raid_level": "concat", 00:14:51.011 "superblock": true, 00:14:51.011 "num_base_bdevs": 4, 00:14:51.011 "num_base_bdevs_discovered": 3, 00:14:51.011 "num_base_bdevs_operational": 4, 00:14:51.011 "base_bdevs_list": [ 00:14:51.011 { 00:14:51.011 "name": "BaseBdev1", 00:14:51.011 "uuid": "fcf80ee1-4225-11ef-aa83-81fbc7dfef58", 00:14:51.011 "is_configured": true, 00:14:51.011 "data_offset": 2048, 00:14:51.011 "data_size": 63488 00:14:51.011 }, 00:14:51.011 { 00:14:51.011 "name": null, 00:14:51.011 "uuid": "faaf5541-4225-11ef-aa83-81fbc7dfef58", 00:14:51.011 "is_configured": false, 00:14:51.011 "data_offset": 2048, 00:14:51.011 "data_size": 63488 00:14:51.011 }, 00:14:51.011 { 00:14:51.011 "name": "BaseBdev3", 00:14:51.011 "uuid": "fb0d5364-4225-11ef-aa83-81fbc7dfef58", 00:14:51.011 "is_configured": true, 00:14:51.011 "data_offset": 2048, 00:14:51.011 "data_size": 63488 00:14:51.011 }, 00:14:51.011 { 00:14:51.011 "name": "BaseBdev4", 00:14:51.011 "uuid": "fb87659b-4225-11ef-aa83-81fbc7dfef58", 00:14:51.011 "is_configured": true, 00:14:51.011 "data_offset": 2048, 00:14:51.011 "data_size": 63488 00:14:51.011 } 00:14:51.011 ] 00:14:51.011 }' 00:14:51.011 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:51.011 21:14:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:51.269 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.269 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:14:51.528 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:14:51.528 21:14:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:14:51.787 [2024-07-14 21:14:03.163392] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:14:51.787 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:51.787 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:51.787 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:51.787 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:51.787 21:14:03 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:51.787 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:51.787 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:51.787 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:51.787 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:51.787 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:51.787 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:51.787 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.045 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.045 "name": "Existed_Raid", 00:14:52.046 "uuid": "fbee8cdc-4225-11ef-aa83-81fbc7dfef58", 00:14:52.046 "strip_size_kb": 64, 00:14:52.046 "state": "configuring", 00:14:52.046 "raid_level": "concat", 00:14:52.046 "superblock": true, 00:14:52.046 "num_base_bdevs": 4, 00:14:52.046 "num_base_bdevs_discovered": 2, 00:14:52.046 "num_base_bdevs_operational": 4, 00:14:52.046 "base_bdevs_list": [ 00:14:52.046 { 00:14:52.046 "name": null, 00:14:52.046 "uuid": "fcf80ee1-4225-11ef-aa83-81fbc7dfef58", 00:14:52.046 "is_configured": false, 00:14:52.046 "data_offset": 2048, 00:14:52.046 "data_size": 63488 00:14:52.046 }, 00:14:52.046 { 00:14:52.046 "name": null, 00:14:52.046 "uuid": "faaf5541-4225-11ef-aa83-81fbc7dfef58", 00:14:52.046 "is_configured": false, 00:14:52.046 "data_offset": 2048, 00:14:52.046 "data_size": 63488 00:14:52.046 }, 00:14:52.046 { 00:14:52.046 "name": "BaseBdev3", 00:14:52.046 "uuid": "fb0d5364-4225-11ef-aa83-81fbc7dfef58", 00:14:52.046 "is_configured": true, 00:14:52.046 "data_offset": 2048, 00:14:52.046 "data_size": 63488 00:14:52.046 }, 00:14:52.046 { 00:14:52.046 "name": "BaseBdev4", 00:14:52.046 "uuid": "fb87659b-4225-11ef-aa83-81fbc7dfef58", 00:14:52.046 "is_configured": true, 00:14:52.046 "data_offset": 2048, 00:14:52.046 "data_size": 63488 00:14:52.046 } 00:14:52.046 ] 00:14:52.046 }' 00:14:52.046 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.046 21:14:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:52.304 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.304 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:14:52.563 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:14:52.563 21:14:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:14:52.822 [2024-07-14 21:14:04.115621] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring concat 64 4 00:14:52.822 
21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:52.822 "name": "Existed_Raid", 00:14:52.822 "uuid": "fbee8cdc-4225-11ef-aa83-81fbc7dfef58", 00:14:52.822 "strip_size_kb": 64, 00:14:52.822 "state": "configuring", 00:14:52.822 "raid_level": "concat", 00:14:52.822 "superblock": true, 00:14:52.822 "num_base_bdevs": 4, 00:14:52.822 "num_base_bdevs_discovered": 3, 00:14:52.822 "num_base_bdevs_operational": 4, 00:14:52.822 "base_bdevs_list": [ 00:14:52.822 { 00:14:52.822 "name": null, 00:14:52.822 "uuid": "fcf80ee1-4225-11ef-aa83-81fbc7dfef58", 00:14:52.822 "is_configured": false, 00:14:52.822 "data_offset": 2048, 00:14:52.822 "data_size": 63488 00:14:52.822 }, 00:14:52.822 { 00:14:52.822 "name": "BaseBdev2", 00:14:52.822 "uuid": "faaf5541-4225-11ef-aa83-81fbc7dfef58", 00:14:52.822 "is_configured": true, 00:14:52.822 "data_offset": 2048, 00:14:52.822 "data_size": 63488 00:14:52.822 }, 00:14:52.822 { 00:14:52.822 "name": "BaseBdev3", 00:14:52.822 "uuid": "fb0d5364-4225-11ef-aa83-81fbc7dfef58", 00:14:52.822 "is_configured": true, 00:14:52.822 "data_offset": 2048, 00:14:52.822 "data_size": 63488 00:14:52.822 }, 00:14:52.822 { 00:14:52.822 "name": "BaseBdev4", 00:14:52.822 "uuid": "fb87659b-4225-11ef-aa83-81fbc7dfef58", 00:14:52.822 "is_configured": true, 00:14:52.822 "data_offset": 2048, 00:14:52.822 "data_size": 63488 00:14:52.822 } 00:14:52.822 ] 00:14:52.822 }' 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:52.822 21:14:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:53.391 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.391 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:14:53.391 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:14:53.391 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r 
'.[0].base_bdevs_list[0].uuid' 00:14:53.391 21:14:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:53.650 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u fcf80ee1-4225-11ef-aa83-81fbc7dfef58 00:14:53.908 [2024-07-14 21:14:05.407750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:14:53.908 [2024-07-14 21:14:05.407798] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x37f24e234f00 00:14:53.908 [2024-07-14 21:14:05.407802] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:14:53.908 [2024-07-14 21:14:05.407820] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x37f24e297e20 00:14:53.908 [2024-07-14 21:14:05.407868] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x37f24e234f00 00:14:53.908 [2024-07-14 21:14:05.407876] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x37f24e234f00 00:14:53.908 [2024-07-14 21:14:05.407896] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:14:53.908 NewBaseBdev 00:14:53.908 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:14:53.908 21:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:14:53.909 21:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:14:53.909 21:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:14:53.909 21:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:14:53.909 21:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:14:53.909 21:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:14:54.167 21:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:14:54.426 [ 00:14:54.427 { 00:14:54.427 "name": "NewBaseBdev", 00:14:54.427 "aliases": [ 00:14:54.427 "fcf80ee1-4225-11ef-aa83-81fbc7dfef58" 00:14:54.427 ], 00:14:54.427 "product_name": "Malloc disk", 00:14:54.427 "block_size": 512, 00:14:54.427 "num_blocks": 65536, 00:14:54.427 "uuid": "fcf80ee1-4225-11ef-aa83-81fbc7dfef58", 00:14:54.427 "assigned_rate_limits": { 00:14:54.427 "rw_ios_per_sec": 0, 00:14:54.427 "rw_mbytes_per_sec": 0, 00:14:54.427 "r_mbytes_per_sec": 0, 00:14:54.427 "w_mbytes_per_sec": 0 00:14:54.427 }, 00:14:54.427 "claimed": true, 00:14:54.427 "claim_type": "exclusive_write", 00:14:54.427 "zoned": false, 00:14:54.427 "supported_io_types": { 00:14:54.427 "read": true, 00:14:54.427 "write": true, 00:14:54.427 "unmap": true, 00:14:54.427 "flush": true, 00:14:54.427 "reset": true, 00:14:54.427 "nvme_admin": false, 00:14:54.427 "nvme_io": false, 00:14:54.427 "nvme_io_md": false, 00:14:54.427 "write_zeroes": true, 00:14:54.427 "zcopy": true, 00:14:54.427 "get_zone_info": false, 00:14:54.427 "zone_management": false, 00:14:54.427 "zone_append": 
false, 00:14:54.427 "compare": false, 00:14:54.427 "compare_and_write": false, 00:14:54.427 "abort": true, 00:14:54.427 "seek_hole": false, 00:14:54.427 "seek_data": false, 00:14:54.427 "copy": true, 00:14:54.427 "nvme_iov_md": false 00:14:54.427 }, 00:14:54.427 "memory_domains": [ 00:14:54.427 { 00:14:54.427 "dma_device_id": "system", 00:14:54.427 "dma_device_type": 1 00:14:54.427 }, 00:14:54.427 { 00:14:54.427 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:54.427 "dma_device_type": 2 00:14:54.427 } 00:14:54.427 ], 00:14:54.427 "driver_specific": {} 00:14:54.427 } 00:14:54.427 ] 00:14:54.427 21:14:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:14:54.427 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online concat 64 4 00:14:54.427 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:14:54.427 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:14:54.427 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:14:54.427 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:14:54.427 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:14:54.427 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:14:54.427 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:14:54.427 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:14:54.427 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:14:54.427 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:14:54.427 21:14:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:14:54.686 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:14:54.686 "name": "Existed_Raid", 00:14:54.686 "uuid": "fbee8cdc-4225-11ef-aa83-81fbc7dfef58", 00:14:54.686 "strip_size_kb": 64, 00:14:54.686 "state": "online", 00:14:54.686 "raid_level": "concat", 00:14:54.686 "superblock": true, 00:14:54.686 "num_base_bdevs": 4, 00:14:54.686 "num_base_bdevs_discovered": 4, 00:14:54.686 "num_base_bdevs_operational": 4, 00:14:54.686 "base_bdevs_list": [ 00:14:54.686 { 00:14:54.686 "name": "NewBaseBdev", 00:14:54.686 "uuid": "fcf80ee1-4225-11ef-aa83-81fbc7dfef58", 00:14:54.686 "is_configured": true, 00:14:54.686 "data_offset": 2048, 00:14:54.686 "data_size": 63488 00:14:54.686 }, 00:14:54.686 { 00:14:54.686 "name": "BaseBdev2", 00:14:54.686 "uuid": "faaf5541-4225-11ef-aa83-81fbc7dfef58", 00:14:54.686 "is_configured": true, 00:14:54.686 "data_offset": 2048, 00:14:54.686 "data_size": 63488 00:14:54.686 }, 00:14:54.686 { 00:14:54.686 "name": "BaseBdev3", 00:14:54.686 "uuid": "fb0d5364-4225-11ef-aa83-81fbc7dfef58", 00:14:54.686 "is_configured": true, 00:14:54.686 "data_offset": 2048, 00:14:54.686 "data_size": 63488 00:14:54.686 }, 00:14:54.686 { 00:14:54.686 "name": "BaseBdev4", 00:14:54.686 "uuid": "fb87659b-4225-11ef-aa83-81fbc7dfef58", 00:14:54.686 "is_configured": true, 00:14:54.686 
"data_offset": 2048, 00:14:54.686 "data_size": 63488 00:14:54.686 } 00:14:54.686 ] 00:14:54.686 }' 00:14:54.686 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:14:54.686 21:14:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:54.945 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:14:54.945 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:14:54.945 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:14:54.945 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:14:54.945 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:14:54.945 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:14:54.945 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:14:54.945 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:14:55.204 [2024-07-14 21:14:06.583668] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:14:55.204 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:14:55.204 "name": "Existed_Raid", 00:14:55.204 "aliases": [ 00:14:55.204 "fbee8cdc-4225-11ef-aa83-81fbc7dfef58" 00:14:55.204 ], 00:14:55.204 "product_name": "Raid Volume", 00:14:55.204 "block_size": 512, 00:14:55.204 "num_blocks": 253952, 00:14:55.204 "uuid": "fbee8cdc-4225-11ef-aa83-81fbc7dfef58", 00:14:55.204 "assigned_rate_limits": { 00:14:55.204 "rw_ios_per_sec": 0, 00:14:55.204 "rw_mbytes_per_sec": 0, 00:14:55.204 "r_mbytes_per_sec": 0, 00:14:55.204 "w_mbytes_per_sec": 0 00:14:55.204 }, 00:14:55.204 "claimed": false, 00:14:55.204 "zoned": false, 00:14:55.204 "supported_io_types": { 00:14:55.204 "read": true, 00:14:55.204 "write": true, 00:14:55.204 "unmap": true, 00:14:55.204 "flush": true, 00:14:55.204 "reset": true, 00:14:55.204 "nvme_admin": false, 00:14:55.204 "nvme_io": false, 00:14:55.204 "nvme_io_md": false, 00:14:55.204 "write_zeroes": true, 00:14:55.204 "zcopy": false, 00:14:55.204 "get_zone_info": false, 00:14:55.204 "zone_management": false, 00:14:55.204 "zone_append": false, 00:14:55.204 "compare": false, 00:14:55.204 "compare_and_write": false, 00:14:55.204 "abort": false, 00:14:55.204 "seek_hole": false, 00:14:55.204 "seek_data": false, 00:14:55.204 "copy": false, 00:14:55.204 "nvme_iov_md": false 00:14:55.204 }, 00:14:55.204 "memory_domains": [ 00:14:55.204 { 00:14:55.204 "dma_device_id": "system", 00:14:55.204 "dma_device_type": 1 00:14:55.204 }, 00:14:55.204 { 00:14:55.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.204 "dma_device_type": 2 00:14:55.204 }, 00:14:55.204 { 00:14:55.204 "dma_device_id": "system", 00:14:55.204 "dma_device_type": 1 00:14:55.204 }, 00:14:55.204 { 00:14:55.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.204 "dma_device_type": 2 00:14:55.204 }, 00:14:55.204 { 00:14:55.204 "dma_device_id": "system", 00:14:55.204 "dma_device_type": 1 00:14:55.204 }, 00:14:55.204 { 00:14:55.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.204 "dma_device_type": 2 00:14:55.204 }, 00:14:55.204 { 00:14:55.204 "dma_device_id": "system", 00:14:55.204 
"dma_device_type": 1 00:14:55.204 }, 00:14:55.204 { 00:14:55.204 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.204 "dma_device_type": 2 00:14:55.204 } 00:14:55.204 ], 00:14:55.204 "driver_specific": { 00:14:55.204 "raid": { 00:14:55.204 "uuid": "fbee8cdc-4225-11ef-aa83-81fbc7dfef58", 00:14:55.204 "strip_size_kb": 64, 00:14:55.204 "state": "online", 00:14:55.204 "raid_level": "concat", 00:14:55.204 "superblock": true, 00:14:55.204 "num_base_bdevs": 4, 00:14:55.204 "num_base_bdevs_discovered": 4, 00:14:55.204 "num_base_bdevs_operational": 4, 00:14:55.204 "base_bdevs_list": [ 00:14:55.204 { 00:14:55.204 "name": "NewBaseBdev", 00:14:55.204 "uuid": "fcf80ee1-4225-11ef-aa83-81fbc7dfef58", 00:14:55.204 "is_configured": true, 00:14:55.204 "data_offset": 2048, 00:14:55.204 "data_size": 63488 00:14:55.204 }, 00:14:55.204 { 00:14:55.204 "name": "BaseBdev2", 00:14:55.204 "uuid": "faaf5541-4225-11ef-aa83-81fbc7dfef58", 00:14:55.204 "is_configured": true, 00:14:55.204 "data_offset": 2048, 00:14:55.204 "data_size": 63488 00:14:55.204 }, 00:14:55.204 { 00:14:55.204 "name": "BaseBdev3", 00:14:55.204 "uuid": "fb0d5364-4225-11ef-aa83-81fbc7dfef58", 00:14:55.204 "is_configured": true, 00:14:55.204 "data_offset": 2048, 00:14:55.204 "data_size": 63488 00:14:55.204 }, 00:14:55.204 { 00:14:55.204 "name": "BaseBdev4", 00:14:55.205 "uuid": "fb87659b-4225-11ef-aa83-81fbc7dfef58", 00:14:55.205 "is_configured": true, 00:14:55.205 "data_offset": 2048, 00:14:55.205 "data_size": 63488 00:14:55.205 } 00:14:55.205 ] 00:14:55.205 } 00:14:55.205 } 00:14:55.205 }' 00:14:55.205 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:14:55.205 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:14:55.205 BaseBdev2 00:14:55.205 BaseBdev3 00:14:55.205 BaseBdev4' 00:14:55.205 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:55.205 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:55.205 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:14:55.463 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:55.463 "name": "NewBaseBdev", 00:14:55.463 "aliases": [ 00:14:55.463 "fcf80ee1-4225-11ef-aa83-81fbc7dfef58" 00:14:55.463 ], 00:14:55.463 "product_name": "Malloc disk", 00:14:55.463 "block_size": 512, 00:14:55.463 "num_blocks": 65536, 00:14:55.463 "uuid": "fcf80ee1-4225-11ef-aa83-81fbc7dfef58", 00:14:55.463 "assigned_rate_limits": { 00:14:55.463 "rw_ios_per_sec": 0, 00:14:55.463 "rw_mbytes_per_sec": 0, 00:14:55.463 "r_mbytes_per_sec": 0, 00:14:55.463 "w_mbytes_per_sec": 0 00:14:55.463 }, 00:14:55.463 "claimed": true, 00:14:55.463 "claim_type": "exclusive_write", 00:14:55.463 "zoned": false, 00:14:55.463 "supported_io_types": { 00:14:55.463 "read": true, 00:14:55.463 "write": true, 00:14:55.463 "unmap": true, 00:14:55.463 "flush": true, 00:14:55.463 "reset": true, 00:14:55.463 "nvme_admin": false, 00:14:55.463 "nvme_io": false, 00:14:55.463 "nvme_io_md": false, 00:14:55.463 "write_zeroes": true, 00:14:55.463 "zcopy": true, 00:14:55.463 "get_zone_info": false, 00:14:55.463 "zone_management": false, 00:14:55.463 "zone_append": false, 00:14:55.463 "compare": false, 00:14:55.463 
"compare_and_write": false, 00:14:55.463 "abort": true, 00:14:55.463 "seek_hole": false, 00:14:55.463 "seek_data": false, 00:14:55.463 "copy": true, 00:14:55.463 "nvme_iov_md": false 00:14:55.463 }, 00:14:55.463 "memory_domains": [ 00:14:55.463 { 00:14:55.463 "dma_device_id": "system", 00:14:55.463 "dma_device_type": 1 00:14:55.463 }, 00:14:55.463 { 00:14:55.463 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.463 "dma_device_type": 2 00:14:55.463 } 00:14:55.463 ], 00:14:55.463 "driver_specific": {} 00:14:55.463 }' 00:14:55.463 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:55.463 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:55.463 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:55.463 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:55.463 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:55.463 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:55.464 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:55.464 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:55.464 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:55.464 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:55.464 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:55.464 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:55.464 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:55.464 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:55.464 21:14:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:55.721 "name": "BaseBdev2", 00:14:55.721 "aliases": [ 00:14:55.721 "faaf5541-4225-11ef-aa83-81fbc7dfef58" 00:14:55.721 ], 00:14:55.721 "product_name": "Malloc disk", 00:14:55.721 "block_size": 512, 00:14:55.721 "num_blocks": 65536, 00:14:55.721 "uuid": "faaf5541-4225-11ef-aa83-81fbc7dfef58", 00:14:55.721 "assigned_rate_limits": { 00:14:55.721 "rw_ios_per_sec": 0, 00:14:55.721 "rw_mbytes_per_sec": 0, 00:14:55.721 "r_mbytes_per_sec": 0, 00:14:55.721 "w_mbytes_per_sec": 0 00:14:55.721 }, 00:14:55.721 "claimed": true, 00:14:55.721 "claim_type": "exclusive_write", 00:14:55.721 "zoned": false, 00:14:55.721 "supported_io_types": { 00:14:55.721 "read": true, 00:14:55.721 "write": true, 00:14:55.721 "unmap": true, 00:14:55.721 "flush": true, 00:14:55.721 "reset": true, 00:14:55.721 "nvme_admin": false, 00:14:55.721 "nvme_io": false, 00:14:55.721 "nvme_io_md": false, 00:14:55.721 "write_zeroes": true, 00:14:55.721 "zcopy": true, 00:14:55.721 "get_zone_info": false, 00:14:55.721 "zone_management": false, 00:14:55.721 "zone_append": false, 00:14:55.721 "compare": false, 00:14:55.721 "compare_and_write": false, 00:14:55.721 "abort": true, 00:14:55.721 "seek_hole": false, 00:14:55.721 "seek_data": false, 00:14:55.721 "copy": true, 
00:14:55.721 "nvme_iov_md": false 00:14:55.721 }, 00:14:55.721 "memory_domains": [ 00:14:55.721 { 00:14:55.721 "dma_device_id": "system", 00:14:55.721 "dma_device_type": 1 00:14:55.721 }, 00:14:55.721 { 00:14:55.721 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.721 "dma_device_type": 2 00:14:55.721 } 00:14:55.721 ], 00:14:55.721 "driver_specific": {} 00:14:55.721 }' 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:14:55.721 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:55.979 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:55.979 "name": "BaseBdev3", 00:14:55.979 "aliases": [ 00:14:55.979 "fb0d5364-4225-11ef-aa83-81fbc7dfef58" 00:14:55.979 ], 00:14:55.979 "product_name": "Malloc disk", 00:14:55.979 "block_size": 512, 00:14:55.979 "num_blocks": 65536, 00:14:55.979 "uuid": "fb0d5364-4225-11ef-aa83-81fbc7dfef58", 00:14:55.979 "assigned_rate_limits": { 00:14:55.979 "rw_ios_per_sec": 0, 00:14:55.979 "rw_mbytes_per_sec": 0, 00:14:55.979 "r_mbytes_per_sec": 0, 00:14:55.979 "w_mbytes_per_sec": 0 00:14:55.979 }, 00:14:55.979 "claimed": true, 00:14:55.979 "claim_type": "exclusive_write", 00:14:55.979 "zoned": false, 00:14:55.979 "supported_io_types": { 00:14:55.979 "read": true, 00:14:55.979 "write": true, 00:14:55.979 "unmap": true, 00:14:55.979 "flush": true, 00:14:55.979 "reset": true, 00:14:55.979 "nvme_admin": false, 00:14:55.979 "nvme_io": false, 00:14:55.979 "nvme_io_md": false, 00:14:55.979 "write_zeroes": true, 00:14:55.979 "zcopy": true, 00:14:55.979 "get_zone_info": false, 00:14:55.979 "zone_management": false, 00:14:55.979 "zone_append": false, 00:14:55.979 "compare": false, 00:14:55.979 "compare_and_write": false, 00:14:55.979 "abort": true, 00:14:55.979 "seek_hole": false, 00:14:55.979 "seek_data": false, 00:14:55.979 "copy": true, 00:14:55.979 "nvme_iov_md": false 00:14:55.979 }, 00:14:55.979 "memory_domains": [ 00:14:55.979 { 00:14:55.979 "dma_device_id": "system", 00:14:55.979 
"dma_device_type": 1 00:14:55.979 }, 00:14:55.979 { 00:14:55.979 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:55.979 "dma_device_type": 2 00:14:55.979 } 00:14:55.979 ], 00:14:55.979 "driver_specific": {} 00:14:55.979 }' 00:14:55.979 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:55.979 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:55.979 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:55.979 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:55.979 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:55.979 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:55.979 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:55.979 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:55.979 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:55.979 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:55.979 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.237 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:56.237 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:14:56.237 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:14:56.237 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:14:56.496 "name": "BaseBdev4", 00:14:56.496 "aliases": [ 00:14:56.496 "fb87659b-4225-11ef-aa83-81fbc7dfef58" 00:14:56.496 ], 00:14:56.496 "product_name": "Malloc disk", 00:14:56.496 "block_size": 512, 00:14:56.496 "num_blocks": 65536, 00:14:56.496 "uuid": "fb87659b-4225-11ef-aa83-81fbc7dfef58", 00:14:56.496 "assigned_rate_limits": { 00:14:56.496 "rw_ios_per_sec": 0, 00:14:56.496 "rw_mbytes_per_sec": 0, 00:14:56.496 "r_mbytes_per_sec": 0, 00:14:56.496 "w_mbytes_per_sec": 0 00:14:56.496 }, 00:14:56.496 "claimed": true, 00:14:56.496 "claim_type": "exclusive_write", 00:14:56.496 "zoned": false, 00:14:56.496 "supported_io_types": { 00:14:56.496 "read": true, 00:14:56.496 "write": true, 00:14:56.496 "unmap": true, 00:14:56.496 "flush": true, 00:14:56.496 "reset": true, 00:14:56.496 "nvme_admin": false, 00:14:56.496 "nvme_io": false, 00:14:56.496 "nvme_io_md": false, 00:14:56.496 "write_zeroes": true, 00:14:56.496 "zcopy": true, 00:14:56.496 "get_zone_info": false, 00:14:56.496 "zone_management": false, 00:14:56.496 "zone_append": false, 00:14:56.496 "compare": false, 00:14:56.496 "compare_and_write": false, 00:14:56.496 "abort": true, 00:14:56.496 "seek_hole": false, 00:14:56.496 "seek_data": false, 00:14:56.496 "copy": true, 00:14:56.496 "nvme_iov_md": false 00:14:56.496 }, 00:14:56.496 "memory_domains": [ 00:14:56.496 { 00:14:56.496 "dma_device_id": "system", 00:14:56.496 "dma_device_type": 1 00:14:56.496 }, 00:14:56.496 { 00:14:56.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:14:56.496 "dma_device_type": 2 
00:14:56.496 } 00:14:56.496 ], 00:14:56.496 "driver_specific": {} 00:14:56.496 }' 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:14:56.496 21:14:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:14:56.754 [2024-07-14 21:14:08.191655] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:14:56.754 [2024-07-14 21:14:08.191676] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:14:56.754 [2024-07-14 21:14:08.191704] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:14:56.754 [2024-07-14 21:14:08.191720] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:14:56.754 [2024-07-14 21:14:08.191724] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x37f24e234f00 name Existed_Raid, state offline 00:14:56.754 21:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 61397 00:14:56.754 21:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 61397 ']' 00:14:56.754 21:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 61397 00:14:56.754 21:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:14:56.755 21:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:14:56.755 21:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 61397 00:14:56.755 21:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:14:56.755 21:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:14:56.755 killing process with pid 61397 00:14:56.755 21:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:14:56.755 21:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 61397' 00:14:56.755 21:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 61397 00:14:56.755 
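Note on the teardown above: the kill sequence is the harness's killprocess helper. It probes that pid 61397 is still alive with kill -0, resolves the process name (on FreeBSD via ps -c -o command piped through tail -1, exactly as the trace shows), checks that the resolved name is not a bare sudo wrapper, then kills the pid and waits on it so no zombie is left behind. A minimal sketch of that pattern, assuming bash; the Linux branch and the sudo handling here are simplifications, not SPDK's exact autotest_common.sh code:

    killprocess() {
        local pid=$1
        # nothing to do if the process is already gone
        kill -0 "$pid" 2>/dev/null || return 0
        if [ "$(uname)" = Linux ]; then
            # assumed Linux branch; the FreeBSD branch below matches the trace
            process_name=$(ps -o comm= -p "$pid")
        else
            # FreeBSD: -c trims the command column to the executable name
            process_name=$(ps -c -o command "$pid" | tail -1)
        fi
        # simplification: refuse to kill a bare sudo wrapper
        [ "$process_name" = sudo ] && return 1
        echo "killing process with pid $pid"
        kill "$pid"
        # reap the child so the test run does not accumulate zombies
        wait "$pid" || true
    }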
[2024-07-14 21:14:08.222985] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:14:56.755 21:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 61397 00:14:56.755 [2024-07-14 21:14:08.257984] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:14:57.014 21:14:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:14:57.014 00:14:57.014 real 0m24.734s 00:14:57.014 user 0m44.813s 00:14:57.014 sys 0m3.744s 00:14:57.014 21:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:57.014 ************************************ 00:14:57.014 END TEST raid_state_function_test_sb 00:14:57.014 ************************************ 00:14:57.014 21:14:08 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:14:57.014 21:14:08 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:14:57.014 21:14:08 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test concat 4 00:14:57.014 21:14:08 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:14:57.014 21:14:08 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:57.014 21:14:08 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:14:57.014 ************************************ 00:14:57.014 START TEST raid_superblock_test 00:14:57.014 ************************************ 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test concat 4 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=concat 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' concat '!=' raid1 ']' 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@404 -- # strip_size=64 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@405 -- # strip_size_create_arg='-z 64' 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=62207 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 
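The launch above starts a bare bdev_svc application serving JSON-RPC on the private socket /var/tmp/spdk-raid.sock; every step of the superblock test that follows is driven through rpc.py against that socket. As the trace below walks through one base bdev at a time, the overall setup reduces to this condensed sketch (the loop and the $rpc shorthand are illustrative; the individual RPC calls, flags, and UUIDs are the ones visible in the trace):

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

    for i in 1 2 3 4; do
        # 32 MiB malloc backing bdev with 512-byte blocks (65536 blocks)
        $rpc bdev_malloc_create 32 512 -b "malloc$i"
        # wrap each malloc bdev in a passthru bdev that pins a fixed UUID
        $rpc bdev_passthru_create -b "malloc$i" -p "pt$i" \
            -u "00000000-0000-0000-0000-00000000000$i"
    done

    # assemble a concat array with a 64 KiB strip size; -s enables the superblock
    $rpc bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s

    # the test then asserts the array is online with all four base bdevs configured
    $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'

Wrapping each malloc bdev in a passthru bdev gives every base bdev a known UUID (00000000-...-0001 through -0004), which is what the base_bdevs_list assertions later in the trace match against.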
00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 62207 /var/tmp/spdk-raid.sock 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 62207 ']' 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:57.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:57.014 21:14:08 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:14:57.014 [2024-07-14 21:14:08.545536] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:14:57.014 [2024-07-14 21:14:08.545776] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:14:57.589 EAL: TSC is not safe to use in SMP mode 00:14:57.589 EAL: TSC is not invariant 00:14:57.589 [2024-07-14 21:14:09.074353] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:57.872 [2024-07-14 21:14:09.162290] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:14:57.872 [2024-07-14 21:14:09.164791] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.872 [2024-07-14 21:14:09.165671] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:57.872 [2024-07-14 21:14:09.165686] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:14:58.130 21:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:58.130 21:14:09 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:14:58.130 21:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:14:58.130 21:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:58.130 21:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:14:58.130 21:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:14:58.130 21:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:14:58.130 21:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:58.130 21:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:58.130 21:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:58.130 21:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:14:58.389 malloc1 00:14:58.389 21:14:09 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:14:58.648 [2024-07-14 21:14:10.078169] 
vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:14:58.648 [2024-07-14 21:14:10.078241] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:58.648 [2024-07-14 21:14:10.078252] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e8e6aa34780 00:14:58.648 [2024-07-14 21:14:10.078260] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:58.648 [2024-07-14 21:14:10.079267] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:58.648 [2024-07-14 21:14:10.079307] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:14:58.648 pt1 00:14:58.648 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:58.648 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:58.648 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:14:58.648 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:14:58.648 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:14:58.648 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:58.648 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:58.648 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:58.648 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:14:58.906 malloc2 00:14:58.906 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:14:59.165 [2024-07-14 21:14:10.594199] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:14:59.165 [2024-07-14 21:14:10.594278] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.165 [2024-07-14 21:14:10.594290] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e8e6aa34c80 00:14:59.165 [2024-07-14 21:14:10.594297] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.165 [2024-07-14 21:14:10.595024] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.165 [2024-07-14 21:14:10.595050] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:14:59.165 pt2 00:14:59.165 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:59.165 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:59.165 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:14:59.165 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:14:59.165 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:14:59.165 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.165 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # 
base_bdevs_pt+=($bdev_pt) 00:14:59.165 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.165 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:14:59.424 malloc3 00:14:59.424 21:14:10 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:14:59.683 [2024-07-14 21:14:11.078237] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:14:59.683 [2024-07-14 21:14:11.078300] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:14:59.683 [2024-07-14 21:14:11.078328] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e8e6aa35180 00:14:59.683 [2024-07-14 21:14:11.078335] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:14:59.683 [2024-07-14 21:14:11.079036] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:14:59.683 [2024-07-14 21:14:11.079060] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:14:59.683 pt3 00:14:59.683 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:14:59.683 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:14:59.683 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc4 00:14:59.683 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:14:59.683 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:14:59.683 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:14:59.683 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:14:59.683 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:14:59.683 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:14:59.942 malloc4 00:14:59.942 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:00.200 [2024-07-14 21:14:11.502246] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:00.200 [2024-07-14 21:14:11.502306] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:00.200 [2024-07-14 21:14:11.502334] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e8e6aa35680 00:15:00.200 [2024-07-14 21:14:11.502341] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:00.200 [2024-07-14 21:14:11.503064] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:00.200 [2024-07-14 21:14:11.503088] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:00.200 pt4 00:15:00.200 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:15:00.200 21:14:11 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:15:00.200 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:15:00.459 [2024-07-14 21:14:11.782275] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:00.459 [2024-07-14 21:14:11.782909] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:00.459 [2024-07-14 21:14:11.782932] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:00.459 [2024-07-14 21:14:11.782943] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:00.459 [2024-07-14 21:14:11.782995] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e8e6aa35900 00:15:00.459 [2024-07-14 21:14:11.783002] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:00.459 [2024-07-14 21:14:11.783048] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e8e6aa97e20 00:15:00.459 [2024-07-14 21:14:11.783157] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e8e6aa35900 00:15:00.459 [2024-07-14 21:14:11.783162] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e8e6aa35900 00:15:00.459 [2024-07-14 21:14:11.783190] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:00.459 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:00.459 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:00.459 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:00.459 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:00.459 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:00.459 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:00.459 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:00.459 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:00.459 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:00.459 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:00.459 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:00.459 21:14:11 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:00.718 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:00.718 "name": "raid_bdev1", 00:15:00.718 "uuid": "0429f0dc-4226-11ef-aa83-81fbc7dfef58", 00:15:00.718 "strip_size_kb": 64, 00:15:00.718 "state": "online", 00:15:00.718 "raid_level": "concat", 00:15:00.718 "superblock": true, 00:15:00.718 "num_base_bdevs": 4, 00:15:00.718 "num_base_bdevs_discovered": 4, 00:15:00.718 "num_base_bdevs_operational": 4, 00:15:00.718 "base_bdevs_list": [ 00:15:00.718 { 00:15:00.718 "name": "pt1", 00:15:00.718 "uuid": 
"00000000-0000-0000-0000-000000000001", 00:15:00.718 "is_configured": true, 00:15:00.718 "data_offset": 2048, 00:15:00.718 "data_size": 63488 00:15:00.718 }, 00:15:00.718 { 00:15:00.718 "name": "pt2", 00:15:00.718 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:00.718 "is_configured": true, 00:15:00.718 "data_offset": 2048, 00:15:00.718 "data_size": 63488 00:15:00.718 }, 00:15:00.718 { 00:15:00.718 "name": "pt3", 00:15:00.718 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:00.718 "is_configured": true, 00:15:00.718 "data_offset": 2048, 00:15:00.718 "data_size": 63488 00:15:00.718 }, 00:15:00.718 { 00:15:00.718 "name": "pt4", 00:15:00.718 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:00.718 "is_configured": true, 00:15:00.718 "data_offset": 2048, 00:15:00.718 "data_size": 63488 00:15:00.718 } 00:15:00.718 ] 00:15:00.718 }' 00:15:00.718 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:00.718 21:14:12 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:00.977 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:15:00.977 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:00.977 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:00.977 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:00.977 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:00.977 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:00.977 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:00.977 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:01.237 [2024-07-14 21:14:12.590316] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:01.237 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:01.237 "name": "raid_bdev1", 00:15:01.237 "aliases": [ 00:15:01.237 "0429f0dc-4226-11ef-aa83-81fbc7dfef58" 00:15:01.237 ], 00:15:01.237 "product_name": "Raid Volume", 00:15:01.237 "block_size": 512, 00:15:01.237 "num_blocks": 253952, 00:15:01.237 "uuid": "0429f0dc-4226-11ef-aa83-81fbc7dfef58", 00:15:01.237 "assigned_rate_limits": { 00:15:01.237 "rw_ios_per_sec": 0, 00:15:01.237 "rw_mbytes_per_sec": 0, 00:15:01.237 "r_mbytes_per_sec": 0, 00:15:01.237 "w_mbytes_per_sec": 0 00:15:01.237 }, 00:15:01.237 "claimed": false, 00:15:01.237 "zoned": false, 00:15:01.237 "supported_io_types": { 00:15:01.237 "read": true, 00:15:01.237 "write": true, 00:15:01.237 "unmap": true, 00:15:01.237 "flush": true, 00:15:01.237 "reset": true, 00:15:01.237 "nvme_admin": false, 00:15:01.237 "nvme_io": false, 00:15:01.237 "nvme_io_md": false, 00:15:01.237 "write_zeroes": true, 00:15:01.237 "zcopy": false, 00:15:01.237 "get_zone_info": false, 00:15:01.237 "zone_management": false, 00:15:01.237 "zone_append": false, 00:15:01.237 "compare": false, 00:15:01.237 "compare_and_write": false, 00:15:01.237 "abort": false, 00:15:01.237 "seek_hole": false, 00:15:01.237 "seek_data": false, 00:15:01.237 "copy": false, 00:15:01.237 "nvme_iov_md": false 00:15:01.237 }, 00:15:01.237 "memory_domains": [ 00:15:01.237 { 00:15:01.237 "dma_device_id": "system", 00:15:01.237 
"dma_device_type": 1 00:15:01.237 }, 00:15:01.237 { 00:15:01.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.237 "dma_device_type": 2 00:15:01.237 }, 00:15:01.237 { 00:15:01.237 "dma_device_id": "system", 00:15:01.237 "dma_device_type": 1 00:15:01.237 }, 00:15:01.237 { 00:15:01.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.237 "dma_device_type": 2 00:15:01.237 }, 00:15:01.237 { 00:15:01.237 "dma_device_id": "system", 00:15:01.237 "dma_device_type": 1 00:15:01.237 }, 00:15:01.237 { 00:15:01.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.237 "dma_device_type": 2 00:15:01.237 }, 00:15:01.237 { 00:15:01.237 "dma_device_id": "system", 00:15:01.237 "dma_device_type": 1 00:15:01.237 }, 00:15:01.237 { 00:15:01.237 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.237 "dma_device_type": 2 00:15:01.237 } 00:15:01.237 ], 00:15:01.237 "driver_specific": { 00:15:01.237 "raid": { 00:15:01.237 "uuid": "0429f0dc-4226-11ef-aa83-81fbc7dfef58", 00:15:01.237 "strip_size_kb": 64, 00:15:01.237 "state": "online", 00:15:01.237 "raid_level": "concat", 00:15:01.237 "superblock": true, 00:15:01.237 "num_base_bdevs": 4, 00:15:01.237 "num_base_bdevs_discovered": 4, 00:15:01.237 "num_base_bdevs_operational": 4, 00:15:01.237 "base_bdevs_list": [ 00:15:01.237 { 00:15:01.237 "name": "pt1", 00:15:01.237 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.237 "is_configured": true, 00:15:01.237 "data_offset": 2048, 00:15:01.237 "data_size": 63488 00:15:01.237 }, 00:15:01.237 { 00:15:01.237 "name": "pt2", 00:15:01.237 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.237 "is_configured": true, 00:15:01.237 "data_offset": 2048, 00:15:01.237 "data_size": 63488 00:15:01.237 }, 00:15:01.237 { 00:15:01.237 "name": "pt3", 00:15:01.237 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:01.237 "is_configured": true, 00:15:01.237 "data_offset": 2048, 00:15:01.237 "data_size": 63488 00:15:01.237 }, 00:15:01.237 { 00:15:01.237 "name": "pt4", 00:15:01.237 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:01.237 "is_configured": true, 00:15:01.237 "data_offset": 2048, 00:15:01.237 "data_size": 63488 00:15:01.237 } 00:15:01.237 ] 00:15:01.237 } 00:15:01.237 } 00:15:01.237 }' 00:15:01.237 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:01.237 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:01.237 pt2 00:15:01.237 pt3 00:15:01.237 pt4' 00:15:01.237 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:01.237 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:01.237 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:01.496 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:01.496 "name": "pt1", 00:15:01.496 "aliases": [ 00:15:01.496 "00000000-0000-0000-0000-000000000001" 00:15:01.496 ], 00:15:01.496 "product_name": "passthru", 00:15:01.496 "block_size": 512, 00:15:01.496 "num_blocks": 65536, 00:15:01.496 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:01.496 "assigned_rate_limits": { 00:15:01.496 "rw_ios_per_sec": 0, 00:15:01.496 "rw_mbytes_per_sec": 0, 00:15:01.496 "r_mbytes_per_sec": 0, 00:15:01.496 "w_mbytes_per_sec": 0 00:15:01.496 }, 00:15:01.496 "claimed": true, 00:15:01.496 
"claim_type": "exclusive_write", 00:15:01.496 "zoned": false, 00:15:01.496 "supported_io_types": { 00:15:01.496 "read": true, 00:15:01.496 "write": true, 00:15:01.496 "unmap": true, 00:15:01.496 "flush": true, 00:15:01.496 "reset": true, 00:15:01.496 "nvme_admin": false, 00:15:01.496 "nvme_io": false, 00:15:01.496 "nvme_io_md": false, 00:15:01.496 "write_zeroes": true, 00:15:01.496 "zcopy": true, 00:15:01.496 "get_zone_info": false, 00:15:01.496 "zone_management": false, 00:15:01.496 "zone_append": false, 00:15:01.496 "compare": false, 00:15:01.496 "compare_and_write": false, 00:15:01.496 "abort": true, 00:15:01.496 "seek_hole": false, 00:15:01.496 "seek_data": false, 00:15:01.496 "copy": true, 00:15:01.496 "nvme_iov_md": false 00:15:01.496 }, 00:15:01.496 "memory_domains": [ 00:15:01.496 { 00:15:01.496 "dma_device_id": "system", 00:15:01.496 "dma_device_type": 1 00:15:01.496 }, 00:15:01.496 { 00:15:01.496 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.497 "dma_device_type": 2 00:15:01.497 } 00:15:01.497 ], 00:15:01.497 "driver_specific": { 00:15:01.497 "passthru": { 00:15:01.497 "name": "pt1", 00:15:01.497 "base_bdev_name": "malloc1" 00:15:01.497 } 00:15:01.497 } 00:15:01.497 }' 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:01.497 21:14:12 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:01.756 "name": "pt2", 00:15:01.756 "aliases": [ 00:15:01.756 "00000000-0000-0000-0000-000000000002" 00:15:01.756 ], 00:15:01.756 "product_name": "passthru", 00:15:01.756 "block_size": 512, 00:15:01.756 "num_blocks": 65536, 00:15:01.756 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:01.756 "assigned_rate_limits": { 00:15:01.756 "rw_ios_per_sec": 0, 00:15:01.756 "rw_mbytes_per_sec": 0, 00:15:01.756 "r_mbytes_per_sec": 0, 00:15:01.756 "w_mbytes_per_sec": 0 00:15:01.756 }, 00:15:01.756 "claimed": true, 00:15:01.756 "claim_type": "exclusive_write", 00:15:01.756 "zoned": false, 00:15:01.756 "supported_io_types": { 00:15:01.756 "read": true, 00:15:01.756 "write": true, 
00:15:01.756 "unmap": true, 00:15:01.756 "flush": true, 00:15:01.756 "reset": true, 00:15:01.756 "nvme_admin": false, 00:15:01.756 "nvme_io": false, 00:15:01.756 "nvme_io_md": false, 00:15:01.756 "write_zeroes": true, 00:15:01.756 "zcopy": true, 00:15:01.756 "get_zone_info": false, 00:15:01.756 "zone_management": false, 00:15:01.756 "zone_append": false, 00:15:01.756 "compare": false, 00:15:01.756 "compare_and_write": false, 00:15:01.756 "abort": true, 00:15:01.756 "seek_hole": false, 00:15:01.756 "seek_data": false, 00:15:01.756 "copy": true, 00:15:01.756 "nvme_iov_md": false 00:15:01.756 }, 00:15:01.756 "memory_domains": [ 00:15:01.756 { 00:15:01.756 "dma_device_id": "system", 00:15:01.756 "dma_device_type": 1 00:15:01.756 }, 00:15:01.756 { 00:15:01.756 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:01.756 "dma_device_type": 2 00:15:01.756 } 00:15:01.756 ], 00:15:01.756 "driver_specific": { 00:15:01.756 "passthru": { 00:15:01.756 "name": "pt2", 00:15:01.756 "base_bdev_name": "malloc2" 00:15:01.756 } 00:15:01.756 } 00:15:01.756 }' 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:01.756 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:02.014 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:02.014 "name": "pt3", 00:15:02.014 "aliases": [ 00:15:02.014 "00000000-0000-0000-0000-000000000003" 00:15:02.014 ], 00:15:02.014 "product_name": "passthru", 00:15:02.014 "block_size": 512, 00:15:02.014 "num_blocks": 65536, 00:15:02.014 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:02.014 "assigned_rate_limits": { 00:15:02.014 "rw_ios_per_sec": 0, 00:15:02.014 "rw_mbytes_per_sec": 0, 00:15:02.014 "r_mbytes_per_sec": 0, 00:15:02.014 "w_mbytes_per_sec": 0 00:15:02.014 }, 00:15:02.014 "claimed": true, 00:15:02.014 "claim_type": "exclusive_write", 00:15:02.014 "zoned": false, 00:15:02.014 "supported_io_types": { 00:15:02.014 "read": true, 00:15:02.014 "write": true, 00:15:02.014 "unmap": true, 00:15:02.014 "flush": true, 00:15:02.014 "reset": true, 00:15:02.014 "nvme_admin": false, 00:15:02.014 "nvme_io": false, 
00:15:02.014 "nvme_io_md": false, 00:15:02.014 "write_zeroes": true, 00:15:02.014 "zcopy": true, 00:15:02.014 "get_zone_info": false, 00:15:02.014 "zone_management": false, 00:15:02.014 "zone_append": false, 00:15:02.014 "compare": false, 00:15:02.014 "compare_and_write": false, 00:15:02.014 "abort": true, 00:15:02.014 "seek_hole": false, 00:15:02.015 "seek_data": false, 00:15:02.015 "copy": true, 00:15:02.015 "nvme_iov_md": false 00:15:02.015 }, 00:15:02.015 "memory_domains": [ 00:15:02.015 { 00:15:02.015 "dma_device_id": "system", 00:15:02.015 "dma_device_type": 1 00:15:02.015 }, 00:15:02.015 { 00:15:02.015 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.015 "dma_device_type": 2 00:15:02.015 } 00:15:02.015 ], 00:15:02.015 "driver_specific": { 00:15:02.015 "passthru": { 00:15:02.015 "name": "pt3", 00:15:02.015 "base_bdev_name": "malloc3" 00:15:02.015 } 00:15:02.015 } 00:15:02.015 }' 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:02.015 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:02.285 "name": "pt4", 00:15:02.285 "aliases": [ 00:15:02.285 "00000000-0000-0000-0000-000000000004" 00:15:02.285 ], 00:15:02.285 "product_name": "passthru", 00:15:02.285 "block_size": 512, 00:15:02.285 "num_blocks": 65536, 00:15:02.285 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:02.285 "assigned_rate_limits": { 00:15:02.285 "rw_ios_per_sec": 0, 00:15:02.285 "rw_mbytes_per_sec": 0, 00:15:02.285 "r_mbytes_per_sec": 0, 00:15:02.285 "w_mbytes_per_sec": 0 00:15:02.285 }, 00:15:02.285 "claimed": true, 00:15:02.285 "claim_type": "exclusive_write", 00:15:02.285 "zoned": false, 00:15:02.285 "supported_io_types": { 00:15:02.285 "read": true, 00:15:02.285 "write": true, 00:15:02.285 "unmap": true, 00:15:02.285 "flush": true, 00:15:02.285 "reset": true, 00:15:02.285 "nvme_admin": false, 00:15:02.285 "nvme_io": false, 00:15:02.285 "nvme_io_md": false, 00:15:02.285 "write_zeroes": true, 00:15:02.285 "zcopy": true, 00:15:02.285 "get_zone_info": false, 00:15:02.285 
"zone_management": false, 00:15:02.285 "zone_append": false, 00:15:02.285 "compare": false, 00:15:02.285 "compare_and_write": false, 00:15:02.285 "abort": true, 00:15:02.285 "seek_hole": false, 00:15:02.285 "seek_data": false, 00:15:02.285 "copy": true, 00:15:02.285 "nvme_iov_md": false 00:15:02.285 }, 00:15:02.285 "memory_domains": [ 00:15:02.285 { 00:15:02.285 "dma_device_id": "system", 00:15:02.285 "dma_device_type": 1 00:15:02.285 }, 00:15:02.285 { 00:15:02.285 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:02.285 "dma_device_type": 2 00:15:02.285 } 00:15:02.285 ], 00:15:02.285 "driver_specific": { 00:15:02.285 "passthru": { 00:15:02.285 "name": "pt4", 00:15:02.285 "base_bdev_name": "malloc4" 00:15:02.285 } 00:15:02.285 } 00:15:02.285 }' 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:02.285 21:14:13 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:15:02.544 [2024-07-14 21:14:14.014399] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:02.544 21:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=0429f0dc-4226-11ef-aa83-81fbc7dfef58 00:15:02.544 21:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 0429f0dc-4226-11ef-aa83-81fbc7dfef58 ']' 00:15:02.544 21:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:02.803 [2024-07-14 21:14:14.234357] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:02.803 [2024-07-14 21:14:14.234374] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:02.803 [2024-07-14 21:14:14.234412] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:02.803 [2024-07-14 21:14:14.234428] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:02.803 [2024-07-14 21:14:14.234431] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e8e6aa35900 name raid_bdev1, state offline 00:15:02.803 21:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:02.803 21:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:15:03.062 21:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:15:03.062 21:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:15:03.062 21:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.062 21:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:15:03.320 21:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.320 21:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:03.320 21:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.320 21:14:14 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:15:03.579 21:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:15:03.579 21:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:15:03.837 21:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:15:03.837 21:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:15:04.096 21:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:15:04.096 21:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:04.096 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:15:04.096 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:04.096 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.096 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.096 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.096 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.096 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.096 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:15:04.096 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:15:04.096 21:14:15 bdev_raid.raid_superblock_test 
-- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:15:04.096 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:15:04.355 [2024-07-14 21:14:15.806395] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:15:04.355 [2024-07-14 21:14:15.807010] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:15:04.355 [2024-07-14 21:14:15.807028] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:15:04.355 [2024-07-14 21:14:15.807036] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:15:04.355 [2024-07-14 21:14:15.807049] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:15:04.355 [2024-07-14 21:14:15.807102] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:15:04.355 [2024-07-14 21:14:15.807128] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:15:04.355 [2024-07-14 21:14:15.807157] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:15:04.355 [2024-07-14 21:14:15.807165] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:04.355 [2024-07-14 21:14:15.807169] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e8e6aa35680 name raid_bdev1, state configuring 00:15:04.355 request: 00:15:04.355 { 00:15:04.355 "name": "raid_bdev1", 00:15:04.355 "raid_level": "concat", 00:15:04.355 "base_bdevs": [ 00:15:04.355 "malloc1", 00:15:04.355 "malloc2", 00:15:04.355 "malloc3", 00:15:04.355 "malloc4" 00:15:04.355 ], 00:15:04.355 "strip_size_kb": 64, 00:15:04.355 "superblock": false, 00:15:04.355 "method": "bdev_raid_create", 00:15:04.355 "req_id": 1 00:15:04.355 } 00:15:04.355 Got JSON-RPC error response 00:15:04.355 response: 00:15:04.355 { 00:15:04.355 "code": -17, 00:15:04.355 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:15:04.355 } 00:15:04.355 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:15:04.355 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:15:04.355 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:15:04.355 21:14:15 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:15:04.355 21:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.355 21:14:15 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:15:04.614 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:15:04.614 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:15:04.614 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:15:04.873 [2024-07-14 21:14:16.274392] vbdev_passthru.c: 
607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:15:04.873 [2024-07-14 21:14:16.274448] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:04.873 [2024-07-14 21:14:16.274476] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e8e6aa35180 00:15:04.873 [2024-07-14 21:14:16.274483] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:04.873 [2024-07-14 21:14:16.275277] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:04.873 [2024-07-14 21:14:16.275301] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:15:04.873 [2024-07-14 21:14:16.275340] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:15:04.873 [2024-07-14 21:14:16.275369] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:15:04.873 pt1 00:15:04.873 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:04.873 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:04.873 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:04.873 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:04.873 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:04.873 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:04.873 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:04.873 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:04.873 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:04.873 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:04.873 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:04.873 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:05.131 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:05.132 "name": "raid_bdev1", 00:15:05.132 "uuid": "0429f0dc-4226-11ef-aa83-81fbc7dfef58", 00:15:05.132 "strip_size_kb": 64, 00:15:05.132 "state": "configuring", 00:15:05.132 "raid_level": "concat", 00:15:05.132 "superblock": true, 00:15:05.132 "num_base_bdevs": 4, 00:15:05.132 "num_base_bdevs_discovered": 1, 00:15:05.132 "num_base_bdevs_operational": 4, 00:15:05.132 "base_bdevs_list": [ 00:15:05.132 { 00:15:05.132 "name": "pt1", 00:15:05.132 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:05.132 "is_configured": true, 00:15:05.132 "data_offset": 2048, 00:15:05.132 "data_size": 63488 00:15:05.132 }, 00:15:05.132 { 00:15:05.132 "name": null, 00:15:05.132 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:05.132 "is_configured": false, 00:15:05.132 "data_offset": 2048, 00:15:05.132 "data_size": 63488 00:15:05.132 }, 00:15:05.132 { 00:15:05.132 "name": null, 00:15:05.132 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:05.132 "is_configured": false, 00:15:05.132 "data_offset": 2048, 00:15:05.132 "data_size": 63488 00:15:05.132 }, 00:15:05.132 { 00:15:05.132 "name": null, 
00:15:05.132 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:05.132 "is_configured": false, 00:15:05.132 "data_offset": 2048, 00:15:05.132 "data_size": 63488 00:15:05.132 } 00:15:05.132 ] 00:15:05.132 }' 00:15:05.132 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:05.132 21:14:16 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:05.390 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:15:05.390 21:14:16 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:05.647 [2024-07-14 21:14:16.990398] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:05.647 [2024-07-14 21:14:16.990453] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:05.647 [2024-07-14 21:14:16.990480] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e8e6aa34780 00:15:05.647 [2024-07-14 21:14:16.990487] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:05.647 [2024-07-14 21:14:16.990614] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:05.647 [2024-07-14 21:14:16.990625] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:05.647 [2024-07-14 21:14:16.990665] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:05.647 [2024-07-14 21:14:16.990673] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:05.647 pt2 00:15:05.647 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:15:05.904 [2024-07-14 21:14:17.254400] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:15:05.904 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring concat 64 4 00:15:05.904 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:05.904 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:05.904 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:05.904 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:05.904 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:05.904 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:05.904 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:05.904 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:05.904 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:05.904 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:05.904 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:06.162 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:06.162 "name": 
"raid_bdev1", 00:15:06.162 "uuid": "0429f0dc-4226-11ef-aa83-81fbc7dfef58", 00:15:06.162 "strip_size_kb": 64, 00:15:06.162 "state": "configuring", 00:15:06.162 "raid_level": "concat", 00:15:06.162 "superblock": true, 00:15:06.162 "num_base_bdevs": 4, 00:15:06.162 "num_base_bdevs_discovered": 1, 00:15:06.162 "num_base_bdevs_operational": 4, 00:15:06.162 "base_bdevs_list": [ 00:15:06.162 { 00:15:06.162 "name": "pt1", 00:15:06.162 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:06.162 "is_configured": true, 00:15:06.162 "data_offset": 2048, 00:15:06.162 "data_size": 63488 00:15:06.162 }, 00:15:06.162 { 00:15:06.162 "name": null, 00:15:06.162 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:06.162 "is_configured": false, 00:15:06.162 "data_offset": 2048, 00:15:06.162 "data_size": 63488 00:15:06.162 }, 00:15:06.162 { 00:15:06.162 "name": null, 00:15:06.162 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:06.162 "is_configured": false, 00:15:06.162 "data_offset": 2048, 00:15:06.162 "data_size": 63488 00:15:06.162 }, 00:15:06.162 { 00:15:06.162 "name": null, 00:15:06.162 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:06.162 "is_configured": false, 00:15:06.162 "data_offset": 2048, 00:15:06.162 "data_size": 63488 00:15:06.162 } 00:15:06.162 ] 00:15:06.162 }' 00:15:06.162 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:06.162 21:14:17 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:06.419 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:15:06.419 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:06.419 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:15:06.678 [2024-07-14 21:14:17.978423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:15:06.678 [2024-07-14 21:14:17.978477] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:06.678 [2024-07-14 21:14:17.978503] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e8e6aa34780 00:15:06.678 [2024-07-14 21:14:17.978510] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.678 [2024-07-14 21:14:17.978619] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.678 [2024-07-14 21:14:17.978630] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:15:06.678 [2024-07-14 21:14:17.978686] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:15:06.678 [2024-07-14 21:14:17.978695] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:15:06.678 pt2 00:15:06.678 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:06.678 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:06.678 21:14:17 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:15:06.936 [2024-07-14 21:14:18.246430] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:15:06.936 [2024-07-14 21:14:18.246490] vbdev_passthru.c: 635:vbdev_passthru_register: 
*NOTICE*: base bdev opened 00:15:06.936 [2024-07-14 21:14:18.246519] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e8e6aa35b80 00:15:06.936 [2024-07-14 21:14:18.246526] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:06.936 [2024-07-14 21:14:18.246665] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:06.936 [2024-07-14 21:14:18.246682] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:15:06.936 [2024-07-14 21:14:18.246704] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:15:06.936 [2024-07-14 21:14:18.246713] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:15:06.936 pt3 00:15:06.936 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:06.936 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:06.936 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:15:07.195 [2024-07-14 21:14:18.490438] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:15:07.195 [2024-07-14 21:14:18.490481] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:07.195 [2024-07-14 21:14:18.490507] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3e8e6aa35900 00:15:07.195 [2024-07-14 21:14:18.490514] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:07.195 [2024-07-14 21:14:18.490666] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:07.195 [2024-07-14 21:14:18.490678] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:15:07.195 [2024-07-14 21:14:18.490699] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:15:07.195 [2024-07-14 21:14:18.490707] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:15:07.195 [2024-07-14 21:14:18.490737] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3e8e6aa34c80 00:15:07.195 [2024-07-14 21:14:18.490742] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:07.195 [2024-07-14 21:14:18.490762] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3e8e6aa97e20 00:15:07.195 [2024-07-14 21:14:18.490850] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3e8e6aa34c80 00:15:07.195 [2024-07-14 21:14:18.490855] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3e8e6aa34c80 00:15:07.195 [2024-07-14 21:14:18.490877] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:07.195 pt4 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:07.195 
21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:07.195 "name": "raid_bdev1", 00:15:07.195 "uuid": "0429f0dc-4226-11ef-aa83-81fbc7dfef58", 00:15:07.195 "strip_size_kb": 64, 00:15:07.195 "state": "online", 00:15:07.195 "raid_level": "concat", 00:15:07.195 "superblock": true, 00:15:07.195 "num_base_bdevs": 4, 00:15:07.195 "num_base_bdevs_discovered": 4, 00:15:07.195 "num_base_bdevs_operational": 4, 00:15:07.195 "base_bdevs_list": [ 00:15:07.195 { 00:15:07.195 "name": "pt1", 00:15:07.195 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.195 "is_configured": true, 00:15:07.195 "data_offset": 2048, 00:15:07.195 "data_size": 63488 00:15:07.195 }, 00:15:07.195 { 00:15:07.195 "name": "pt2", 00:15:07.195 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.195 "is_configured": true, 00:15:07.195 "data_offset": 2048, 00:15:07.195 "data_size": 63488 00:15:07.195 }, 00:15:07.195 { 00:15:07.195 "name": "pt3", 00:15:07.195 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.195 "is_configured": true, 00:15:07.195 "data_offset": 2048, 00:15:07.195 "data_size": 63488 00:15:07.195 }, 00:15:07.195 { 00:15:07.195 "name": "pt4", 00:15:07.195 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.195 "is_configured": true, 00:15:07.195 "data_offset": 2048, 00:15:07.195 "data_size": 63488 00:15:07.195 } 00:15:07.195 ] 00:15:07.195 }' 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:07.195 21:14:18 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:07.761 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:15:07.761 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:15:07.761 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:07.761 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:07.761 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:07.761 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:07.761 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:07.761 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq 
'.[]' 00:15:07.761 [2024-07-14 21:14:19.210535] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:07.761 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:07.761 "name": "raid_bdev1", 00:15:07.761 "aliases": [ 00:15:07.761 "0429f0dc-4226-11ef-aa83-81fbc7dfef58" 00:15:07.761 ], 00:15:07.761 "product_name": "Raid Volume", 00:15:07.761 "block_size": 512, 00:15:07.761 "num_blocks": 253952, 00:15:07.761 "uuid": "0429f0dc-4226-11ef-aa83-81fbc7dfef58", 00:15:07.761 "assigned_rate_limits": { 00:15:07.761 "rw_ios_per_sec": 0, 00:15:07.761 "rw_mbytes_per_sec": 0, 00:15:07.761 "r_mbytes_per_sec": 0, 00:15:07.761 "w_mbytes_per_sec": 0 00:15:07.761 }, 00:15:07.761 "claimed": false, 00:15:07.761 "zoned": false, 00:15:07.761 "supported_io_types": { 00:15:07.761 "read": true, 00:15:07.761 "write": true, 00:15:07.761 "unmap": true, 00:15:07.761 "flush": true, 00:15:07.761 "reset": true, 00:15:07.761 "nvme_admin": false, 00:15:07.761 "nvme_io": false, 00:15:07.761 "nvme_io_md": false, 00:15:07.761 "write_zeroes": true, 00:15:07.761 "zcopy": false, 00:15:07.761 "get_zone_info": false, 00:15:07.761 "zone_management": false, 00:15:07.761 "zone_append": false, 00:15:07.761 "compare": false, 00:15:07.761 "compare_and_write": false, 00:15:07.761 "abort": false, 00:15:07.761 "seek_hole": false, 00:15:07.761 "seek_data": false, 00:15:07.761 "copy": false, 00:15:07.761 "nvme_iov_md": false 00:15:07.761 }, 00:15:07.761 "memory_domains": [ 00:15:07.761 { 00:15:07.761 "dma_device_id": "system", 00:15:07.761 "dma_device_type": 1 00:15:07.761 }, 00:15:07.761 { 00:15:07.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.761 "dma_device_type": 2 00:15:07.761 }, 00:15:07.761 { 00:15:07.761 "dma_device_id": "system", 00:15:07.761 "dma_device_type": 1 00:15:07.761 }, 00:15:07.761 { 00:15:07.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.761 "dma_device_type": 2 00:15:07.761 }, 00:15:07.761 { 00:15:07.761 "dma_device_id": "system", 00:15:07.761 "dma_device_type": 1 00:15:07.761 }, 00:15:07.761 { 00:15:07.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.761 "dma_device_type": 2 00:15:07.761 }, 00:15:07.761 { 00:15:07.761 "dma_device_id": "system", 00:15:07.761 "dma_device_type": 1 00:15:07.761 }, 00:15:07.761 { 00:15:07.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:07.761 "dma_device_type": 2 00:15:07.761 } 00:15:07.761 ], 00:15:07.761 "driver_specific": { 00:15:07.761 "raid": { 00:15:07.761 "uuid": "0429f0dc-4226-11ef-aa83-81fbc7dfef58", 00:15:07.761 "strip_size_kb": 64, 00:15:07.761 "state": "online", 00:15:07.761 "raid_level": "concat", 00:15:07.761 "superblock": true, 00:15:07.761 "num_base_bdevs": 4, 00:15:07.761 "num_base_bdevs_discovered": 4, 00:15:07.761 "num_base_bdevs_operational": 4, 00:15:07.761 "base_bdevs_list": [ 00:15:07.761 { 00:15:07.761 "name": "pt1", 00:15:07.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:07.761 "is_configured": true, 00:15:07.762 "data_offset": 2048, 00:15:07.762 "data_size": 63488 00:15:07.762 }, 00:15:07.762 { 00:15:07.762 "name": "pt2", 00:15:07.762 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:07.762 "is_configured": true, 00:15:07.762 "data_offset": 2048, 00:15:07.762 "data_size": 63488 00:15:07.762 }, 00:15:07.762 { 00:15:07.762 "name": "pt3", 00:15:07.762 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:07.762 "is_configured": true, 00:15:07.762 "data_offset": 2048, 00:15:07.762 "data_size": 63488 00:15:07.762 }, 00:15:07.762 { 00:15:07.762 "name": "pt4", 
00:15:07.762 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:07.762 "is_configured": true, 00:15:07.762 "data_offset": 2048, 00:15:07.762 "data_size": 63488 00:15:07.762 } 00:15:07.762 ] 00:15:07.762 } 00:15:07.762 } 00:15:07.762 }' 00:15:07.762 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:07.762 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:15:07.762 pt2 00:15:07.762 pt3 00:15:07.762 pt4' 00:15:07.762 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:07.762 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:15:07.762 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:08.020 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:08.020 "name": "pt1", 00:15:08.020 "aliases": [ 00:15:08.020 "00000000-0000-0000-0000-000000000001" 00:15:08.020 ], 00:15:08.020 "product_name": "passthru", 00:15:08.020 "block_size": 512, 00:15:08.020 "num_blocks": 65536, 00:15:08.020 "uuid": "00000000-0000-0000-0000-000000000001", 00:15:08.020 "assigned_rate_limits": { 00:15:08.020 "rw_ios_per_sec": 0, 00:15:08.020 "rw_mbytes_per_sec": 0, 00:15:08.020 "r_mbytes_per_sec": 0, 00:15:08.020 "w_mbytes_per_sec": 0 00:15:08.020 }, 00:15:08.020 "claimed": true, 00:15:08.020 "claim_type": "exclusive_write", 00:15:08.020 "zoned": false, 00:15:08.020 "supported_io_types": { 00:15:08.020 "read": true, 00:15:08.020 "write": true, 00:15:08.020 "unmap": true, 00:15:08.020 "flush": true, 00:15:08.020 "reset": true, 00:15:08.020 "nvme_admin": false, 00:15:08.020 "nvme_io": false, 00:15:08.020 "nvme_io_md": false, 00:15:08.020 "write_zeroes": true, 00:15:08.020 "zcopy": true, 00:15:08.020 "get_zone_info": false, 00:15:08.020 "zone_management": false, 00:15:08.020 "zone_append": false, 00:15:08.020 "compare": false, 00:15:08.020 "compare_and_write": false, 00:15:08.020 "abort": true, 00:15:08.020 "seek_hole": false, 00:15:08.020 "seek_data": false, 00:15:08.020 "copy": true, 00:15:08.020 "nvme_iov_md": false 00:15:08.020 }, 00:15:08.020 "memory_domains": [ 00:15:08.020 { 00:15:08.020 "dma_device_id": "system", 00:15:08.020 "dma_device_type": 1 00:15:08.020 }, 00:15:08.020 { 00:15:08.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.020 "dma_device_type": 2 00:15:08.020 } 00:15:08.020 ], 00:15:08.020 "driver_specific": { 00:15:08.020 "passthru": { 00:15:08.020 "name": "pt1", 00:15:08.020 "base_bdev_name": "malloc1" 00:15:08.020 } 00:15:08.020 } 00:15:08.020 }' 00:15:08.020 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.020 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.020 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:08.020 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.020 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.020 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:08.021 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.021 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:15:08.021 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:08.021 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.021 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.021 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:08.021 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:08.021 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:15:08.021 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:08.279 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:08.279 "name": "pt2", 00:15:08.279 "aliases": [ 00:15:08.279 "00000000-0000-0000-0000-000000000002" 00:15:08.279 ], 00:15:08.279 "product_name": "passthru", 00:15:08.279 "block_size": 512, 00:15:08.279 "num_blocks": 65536, 00:15:08.279 "uuid": "00000000-0000-0000-0000-000000000002", 00:15:08.279 "assigned_rate_limits": { 00:15:08.279 "rw_ios_per_sec": 0, 00:15:08.279 "rw_mbytes_per_sec": 0, 00:15:08.279 "r_mbytes_per_sec": 0, 00:15:08.279 "w_mbytes_per_sec": 0 00:15:08.279 }, 00:15:08.279 "claimed": true, 00:15:08.279 "claim_type": "exclusive_write", 00:15:08.279 "zoned": false, 00:15:08.279 "supported_io_types": { 00:15:08.279 "read": true, 00:15:08.279 "write": true, 00:15:08.279 "unmap": true, 00:15:08.279 "flush": true, 00:15:08.279 "reset": true, 00:15:08.279 "nvme_admin": false, 00:15:08.279 "nvme_io": false, 00:15:08.279 "nvme_io_md": false, 00:15:08.279 "write_zeroes": true, 00:15:08.279 "zcopy": true, 00:15:08.279 "get_zone_info": false, 00:15:08.279 "zone_management": false, 00:15:08.279 "zone_append": false, 00:15:08.279 "compare": false, 00:15:08.279 "compare_and_write": false, 00:15:08.279 "abort": true, 00:15:08.279 "seek_hole": false, 00:15:08.279 "seek_data": false, 00:15:08.279 "copy": true, 00:15:08.279 "nvme_iov_md": false 00:15:08.279 }, 00:15:08.279 "memory_domains": [ 00:15:08.279 { 00:15:08.279 "dma_device_id": "system", 00:15:08.279 "dma_device_type": 1 00:15:08.279 }, 00:15:08.279 { 00:15:08.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.279 "dma_device_type": 2 00:15:08.279 } 00:15:08.279 ], 00:15:08.279 "driver_specific": { 00:15:08.279 "passthru": { 00:15:08.279 "name": "pt2", 00:15:08.279 "base_bdev_name": "malloc2" 00:15:08.279 } 00:15:08.279 } 00:15:08.279 }' 00:15:08.279 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.279 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.279 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:08.279 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.279 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.279 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:08.279 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.279 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.279 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:08.279 21:14:19 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.279 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.538 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:08.538 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:08.538 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:15:08.538 21:14:19 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:08.796 "name": "pt3", 00:15:08.796 "aliases": [ 00:15:08.796 "00000000-0000-0000-0000-000000000003" 00:15:08.796 ], 00:15:08.796 "product_name": "passthru", 00:15:08.796 "block_size": 512, 00:15:08.796 "num_blocks": 65536, 00:15:08.796 "uuid": "00000000-0000-0000-0000-000000000003", 00:15:08.796 "assigned_rate_limits": { 00:15:08.796 "rw_ios_per_sec": 0, 00:15:08.796 "rw_mbytes_per_sec": 0, 00:15:08.796 "r_mbytes_per_sec": 0, 00:15:08.796 "w_mbytes_per_sec": 0 00:15:08.796 }, 00:15:08.796 "claimed": true, 00:15:08.796 "claim_type": "exclusive_write", 00:15:08.796 "zoned": false, 00:15:08.796 "supported_io_types": { 00:15:08.796 "read": true, 00:15:08.796 "write": true, 00:15:08.796 "unmap": true, 00:15:08.796 "flush": true, 00:15:08.796 "reset": true, 00:15:08.796 "nvme_admin": false, 00:15:08.796 "nvme_io": false, 00:15:08.796 "nvme_io_md": false, 00:15:08.796 "write_zeroes": true, 00:15:08.796 "zcopy": true, 00:15:08.796 "get_zone_info": false, 00:15:08.796 "zone_management": false, 00:15:08.796 "zone_append": false, 00:15:08.796 "compare": false, 00:15:08.796 "compare_and_write": false, 00:15:08.796 "abort": true, 00:15:08.796 "seek_hole": false, 00:15:08.796 "seek_data": false, 00:15:08.796 "copy": true, 00:15:08.796 "nvme_iov_md": false 00:15:08.796 }, 00:15:08.796 "memory_domains": [ 00:15:08.796 { 00:15:08.796 "dma_device_id": "system", 00:15:08.796 "dma_device_type": 1 00:15:08.796 }, 00:15:08.796 { 00:15:08.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:08.796 "dma_device_type": 2 00:15:08.796 } 00:15:08.796 ], 00:15:08.796 "driver_specific": { 00:15:08.796 "passthru": { 00:15:08.796 "name": "pt3", 00:15:08.796 "base_bdev_name": "malloc3" 00:15:08.796 } 00:15:08.796 } 00:15:08.796 }' 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- 
# jq .dif_type 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:15:08.796 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:09.054 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:09.054 "name": "pt4", 00:15:09.054 "aliases": [ 00:15:09.054 "00000000-0000-0000-0000-000000000004" 00:15:09.054 ], 00:15:09.054 "product_name": "passthru", 00:15:09.054 "block_size": 512, 00:15:09.054 "num_blocks": 65536, 00:15:09.054 "uuid": "00000000-0000-0000-0000-000000000004", 00:15:09.054 "assigned_rate_limits": { 00:15:09.054 "rw_ios_per_sec": 0, 00:15:09.054 "rw_mbytes_per_sec": 0, 00:15:09.054 "r_mbytes_per_sec": 0, 00:15:09.054 "w_mbytes_per_sec": 0 00:15:09.054 }, 00:15:09.054 "claimed": true, 00:15:09.054 "claim_type": "exclusive_write", 00:15:09.054 "zoned": false, 00:15:09.054 "supported_io_types": { 00:15:09.054 "read": true, 00:15:09.054 "write": true, 00:15:09.054 "unmap": true, 00:15:09.054 "flush": true, 00:15:09.054 "reset": true, 00:15:09.054 "nvme_admin": false, 00:15:09.054 "nvme_io": false, 00:15:09.054 "nvme_io_md": false, 00:15:09.054 "write_zeroes": true, 00:15:09.054 "zcopy": true, 00:15:09.054 "get_zone_info": false, 00:15:09.054 "zone_management": false, 00:15:09.054 "zone_append": false, 00:15:09.054 "compare": false, 00:15:09.054 "compare_and_write": false, 00:15:09.054 "abort": true, 00:15:09.054 "seek_hole": false, 00:15:09.054 "seek_data": false, 00:15:09.054 "copy": true, 00:15:09.054 "nvme_iov_md": false 00:15:09.054 }, 00:15:09.054 "memory_domains": [ 00:15:09.054 { 00:15:09.054 "dma_device_id": "system", 00:15:09.054 "dma_device_type": 1 00:15:09.054 }, 00:15:09.054 { 00:15:09.054 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:09.054 "dma_device_type": 2 00:15:09.054 } 00:15:09.054 ], 00:15:09.054 "driver_specific": { 00:15:09.054 "passthru": { 00:15:09.054 "name": "pt4", 00:15:09.054 "base_bdev_name": "malloc4" 00:15:09.054 } 00:15:09.054 } 00:15:09.054 }' 00:15:09.054 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:09.054 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:09.054 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:09.054 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:09.054 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:09.054 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:09.054 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:09.054 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:09.054 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:09.054 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:09.054 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:09.054 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:09.055 21:14:20 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:15:09.055 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:15:09.312 [2024-07-14 21:14:20.758644] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 0429f0dc-4226-11ef-aa83-81fbc7dfef58 '!=' 0429f0dc-4226-11ef-aa83-81fbc7dfef58 ']' 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy concat 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 62207 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 62207 ']' 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 62207 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 62207 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:09.312 killing process with pid 62207 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62207' 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 62207 00:15:09.312 [2024-07-14 21:14:20.785209] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:09.312 [2024-07-14 21:14:20.785238] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:09.312 [2024-07-14 21:14:20.785258] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:09.312 [2024-07-14 21:14:20.785263] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3e8e6aa34c80 name raid_bdev1, state offline 00:15:09.312 21:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 62207 00:15:09.312 [2024-07-14 21:14:20.808942] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:09.570 21:14:20 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:15:09.570 00:15:09.570 real 0m12.441s 00:15:09.570 user 0m22.104s 00:15:09.570 sys 0m1.995s 00:15:09.570 21:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:09.570 21:14:20 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.570 ************************************ 00:15:09.570 END TEST raid_superblock_test 00:15:09.570 ************************************ 00:15:09.570 21:14:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:09.570 21:14:21 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test concat 4 
read 00:15:09.570 21:14:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:09.570 21:14:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:09.570 21:14:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:09.570 ************************************ 00:15:09.570 START TEST raid_read_error_test 00:15:09.570 ************************************ 00:15:09.570 21:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 read 00:15:09.570 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:09.570 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:09.570 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:15:09.570 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:09.570 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:09.570 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:09.570 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:09.570 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.niUGJ0DxF5 
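The read-error test has just allocated a temp file for the bdevperf log; the next lines launch the bdevperf example app against the raid test socket and wait for its RPC server to come up. A minimal by-hand sketch of that launch, using only the paths, flags, and helper names that appear in this run (the output redirection is an assumption about how the script captures the log, and the temp file name will differ per run; waitforlisten is the autotest_common.sh helper invoked below):

  bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
  bdevperf_log=$(mktemp -p /raidtest)      # e.g. /raidtest/tmp.niUGJ0DxF5 in this run
  # 60s randrw workload, 50% reads, 128k I/O, queue depth 1, raid trace enabled
  $bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 \
      -o 128k -q 1 -z -f -L bdev_raid > "$bdevperf_log" 2>&1 &
  raid_pid=$!                              # 62604 in this run
  waitforlisten $raid_pid /var/tmp/spdk-raid.sock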
00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62604 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62604 /var/tmp/spdk-raid.sock 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 62604 ']' 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:09.571 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:09.571 21:14:21 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:09.571 [2024-07-14 21:14:21.050419] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:09.571 [2024-07-14 21:14:21.050720] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:10.135 EAL: TSC is not safe to use in SMP mode 00:15:10.135 EAL: TSC is not invariant 00:15:10.135 [2024-07-14 21:14:21.575547] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:10.135 [2024-07-14 21:14:21.658200] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
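For context on the setup that follows: each of the four base bdevs is built as a three-layer stack — a malloc backing store, an error-injection wrapper, then a passthru bdev that the RAID volume actually consumes. A condensed sketch of the RPC sequence the script runs per bdev; every command, name, and size is taken from this log, while the loop form itself is illustrative:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for bdev in BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4; do
      $rpc bdev_malloc_create 32 512 -b ${bdev}_malloc     # 32 MB volume, 512-byte blocks
      $rpc bdev_error_create ${bdev}_malloc                # exposes EE_${bdev}_malloc for fault injection
      $rpc bdev_passthru_create -b EE_${bdev}_malloc -p ${bdev}
  done
  # finally assembled into the array under test (concat, 64k strip, with superblock):
  $rpc bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s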
00:15:10.135 [2024-07-14 21:14:21.660647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:10.135 [2024-07-14 21:14:21.661509] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.135 [2024-07-14 21:14:21.661524] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:10.701 21:14:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:10.701 21:14:22 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:10.701 21:14:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:10.701 21:14:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:10.958 BaseBdev1_malloc 00:15:10.958 21:14:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:11.216 true 00:15:11.216 21:14:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:11.473 [2024-07-14 21:14:22.781647] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:11.473 [2024-07-14 21:14:22.781706] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.473 [2024-07-14 21:14:22.781741] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc9392e34780 00:15:11.473 [2024-07-14 21:14:22.781749] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.473 [2024-07-14 21:14:22.782310] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.473 [2024-07-14 21:14:22.782335] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:11.473 BaseBdev1 00:15:11.473 21:14:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:11.473 21:14:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:11.473 BaseBdev2_malloc 00:15:11.473 21:14:22 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:11.731 true 00:15:11.731 21:14:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:11.988 [2024-07-14 21:14:23.417660] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:11.988 [2024-07-14 21:14:23.417704] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:11.988 [2024-07-14 21:14:23.417742] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc9392e34c80 00:15:11.988 [2024-07-14 21:14:23.417750] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:11.988 [2024-07-14 21:14:23.418455] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:11.988 [2024-07-14 21:14:23.418481] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev 
for: BaseBdev2 00:15:11.988 BaseBdev2 00:15:11.988 21:14:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:11.988 21:14:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:12.260 BaseBdev3_malloc 00:15:12.260 21:14:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:12.528 true 00:15:12.528 21:14:23 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:12.786 [2024-07-14 21:14:24.093680] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:12.786 [2024-07-14 21:14:24.093733] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:12.786 [2024-07-14 21:14:24.093772] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc9392e35180 00:15:12.786 [2024-07-14 21:14:24.093780] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:12.786 [2024-07-14 21:14:24.094396] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:12.786 [2024-07-14 21:14:24.094435] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:12.786 BaseBdev3 00:15:12.786 21:14:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:12.786 21:14:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:13.044 BaseBdev4_malloc 00:15:13.045 21:14:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:13.304 true 00:15:13.304 21:14:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:13.304 [2024-07-14 21:14:24.813678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:13.304 [2024-07-14 21:14:24.813728] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:13.304 [2024-07-14 21:14:24.813767] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc9392e35680 00:15:13.304 [2024-07-14 21:14:24.813774] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:13.304 [2024-07-14 21:14:24.814377] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:13.304 [2024-07-14 21:14:24.814401] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:13.304 BaseBdev4 00:15:13.304 21:14:24 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:13.562 [2024-07-14 21:14:25.053702] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:13.562 [2024-07-14 21:14:25.054257] bdev_raid.c:3198:raid_bdev_configure_base_bdev: 
*DEBUG*: bdev BaseBdev2 is claimed 00:15:13.562 [2024-07-14 21:14:25.054298] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:13.562 [2024-07-14 21:14:25.054313] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:13.562 [2024-07-14 21:14:25.054387] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc9392e35900 00:15:13.562 [2024-07-14 21:14:25.054393] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:13.562 [2024-07-14 21:14:25.054428] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc9392ea0e20 00:15:13.562 [2024-07-14 21:14:25.054511] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc9392e35900 00:15:13.562 [2024-07-14 21:14:25.054516] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc9392e35900 00:15:13.562 [2024-07-14 21:14:25.054541] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:13.562 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:13.562 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:13.562 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:13.562 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:13.562 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:13.562 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:13.562 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:13.562 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:13.562 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:13.562 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:13.562 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:13.562 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:13.819 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:13.820 "name": "raid_bdev1", 00:15:13.820 "uuid": "0c12fffc-4226-11ef-aa83-81fbc7dfef58", 00:15:13.820 "strip_size_kb": 64, 00:15:13.820 "state": "online", 00:15:13.820 "raid_level": "concat", 00:15:13.820 "superblock": true, 00:15:13.820 "num_base_bdevs": 4, 00:15:13.820 "num_base_bdevs_discovered": 4, 00:15:13.820 "num_base_bdevs_operational": 4, 00:15:13.820 "base_bdevs_list": [ 00:15:13.820 { 00:15:13.820 "name": "BaseBdev1", 00:15:13.820 "uuid": "cef566f9-f252-175a-9a83-e3066606f797", 00:15:13.820 "is_configured": true, 00:15:13.820 "data_offset": 2048, 00:15:13.820 "data_size": 63488 00:15:13.820 }, 00:15:13.820 { 00:15:13.820 "name": "BaseBdev2", 00:15:13.820 "uuid": "5bf13649-4d7e-0054-a10c-1d2883917a38", 00:15:13.820 "is_configured": true, 00:15:13.820 "data_offset": 2048, 00:15:13.820 "data_size": 63488 00:15:13.820 }, 00:15:13.820 { 00:15:13.820 "name": "BaseBdev3", 00:15:13.820 "uuid": "a6e5ee31-25ba-415c-98ee-db42a2250875", 00:15:13.820 
"is_configured": true, 00:15:13.820 "data_offset": 2048, 00:15:13.820 "data_size": 63488 00:15:13.820 }, 00:15:13.820 { 00:15:13.820 "name": "BaseBdev4", 00:15:13.820 "uuid": "e48f90bc-6c54-9758-91ed-7e3bd67275c7", 00:15:13.820 "is_configured": true, 00:15:13.820 "data_offset": 2048, 00:15:13.820 "data_size": 63488 00:15:13.820 } 00:15:13.820 ] 00:15:13.820 }' 00:15:13.820 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:13.820 21:14:25 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:14.077 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:14.077 21:14:25 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:14.335 [2024-07-14 21:14:25.705880] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc9392ea0ec0 00:15:15.270 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:15.529 21:14:26 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:15.788 21:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:15.788 "name": "raid_bdev1", 00:15:15.788 "uuid": "0c12fffc-4226-11ef-aa83-81fbc7dfef58", 00:15:15.788 "strip_size_kb": 64, 00:15:15.788 "state": "online", 00:15:15.788 "raid_level": "concat", 00:15:15.788 "superblock": true, 00:15:15.788 "num_base_bdevs": 4, 00:15:15.788 "num_base_bdevs_discovered": 4, 00:15:15.788 "num_base_bdevs_operational": 4, 00:15:15.788 "base_bdevs_list": [ 00:15:15.788 { 00:15:15.788 "name": "BaseBdev1", 00:15:15.788 "uuid": "cef566f9-f252-175a-9a83-e3066606f797", 00:15:15.788 "is_configured": true, 
00:15:15.788 "data_offset": 2048, 00:15:15.788 "data_size": 63488 00:15:15.788 }, 00:15:15.788 { 00:15:15.788 "name": "BaseBdev2", 00:15:15.788 "uuid": "5bf13649-4d7e-0054-a10c-1d2883917a38", 00:15:15.788 "is_configured": true, 00:15:15.788 "data_offset": 2048, 00:15:15.788 "data_size": 63488 00:15:15.788 }, 00:15:15.788 { 00:15:15.788 "name": "BaseBdev3", 00:15:15.788 "uuid": "a6e5ee31-25ba-415c-98ee-db42a2250875", 00:15:15.788 "is_configured": true, 00:15:15.788 "data_offset": 2048, 00:15:15.788 "data_size": 63488 00:15:15.788 }, 00:15:15.788 { 00:15:15.788 "name": "BaseBdev4", 00:15:15.788 "uuid": "e48f90bc-6c54-9758-91ed-7e3bd67275c7", 00:15:15.788 "is_configured": true, 00:15:15.788 "data_offset": 2048, 00:15:15.788 "data_size": 63488 00:15:15.788 } 00:15:15.788 ] 00:15:15.788 }' 00:15:15.788 21:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:15.788 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.047 21:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:16.306 [2024-07-14 21:14:27.599449] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:16.306 [2024-07-14 21:14:27.599476] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:16.306 [2024-07-14 21:14:27.599824] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:16.306 [2024-07-14 21:14:27.599849] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:16.306 [2024-07-14 21:14:27.599857] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:16.306 [2024-07-14 21:14:27.599862] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc9392e35900 name raid_bdev1, state offline 00:15:16.306 0 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62604 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 62604 ']' 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 62604 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62604 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:15:16.306 killing process with pid 62604 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62604' 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 62604 00:15:16.306 [2024-07-14 21:14:27.627675] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 62604 00:15:16.306 [2024-07-14 21:14:27.650054] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:16.306 21:14:27 
bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.niUGJ0DxF5 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.53 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:16.306 21:14:27 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.53 != \0\.\0\0 ]] 00:15:16.306 00:15:16.306 real 0m6.789s 00:15:16.306 user 0m10.760s 00:15:16.307 sys 0m1.076s 00:15:16.307 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:16.307 21:14:27 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.307 ************************************ 00:15:16.307 END TEST raid_read_error_test 00:15:16.307 ************************************ 00:15:16.566 21:14:27 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:16.566 21:14:27 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test concat 4 write 00:15:16.566 21:14:27 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:16.566 21:14:27 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:16.566 21:14:27 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:16.566 ************************************ 00:15:16.566 START TEST raid_write_error_test 00:15:16.566 ************************************ 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test concat 4 write 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=concat 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:15:16.566 21:14:27 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' concat '!=' raid1 ']' 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@799 -- # strip_size=64 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@800 -- # create_arg+=' -z 64' 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.uJHgcZc1i7 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=62738 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 62738 /var/tmp/spdk-raid.sock 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 62738 ']' 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:16.566 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:15:16.566 21:14:27 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:16.566 [2024-07-14 21:14:27.889614] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:16.566 [2024-07-14 21:14:27.889807] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:17.135 EAL: TSC is not safe to use in SMP mode 00:15:17.135 EAL: TSC is not invariant 00:15:17.135 [2024-07-14 21:14:28.426433] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:17.135 [2024-07-14 21:14:28.510616] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
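[aside] What the trace walks through next is the same per-device setup the read test used: each of the four base devices is a malloc bdev wrapped by an error injector and a passthru. Condensed into one loop, the RPC sequence from the bdev_raid.sh@812-815 lines (socket and names exactly as in the log) is roughly:

  for n in 1 2 3 4; do
      # 32 MB backing store, 512-byte blocks (65536 blocks, matching the JSON dumps)
      scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b "BaseBdev${n}_malloc"
      # the error bdev registers on top of the malloc as EE_BaseBdev${n}_malloc
      scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create "BaseBdev${n}_malloc"
      # the passthru claims the error bdev and exposes it as BaseBdev${n}
      scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b "EE_BaseBdev${n}_malloc" -p "BaseBdev${n}"
  done

With the stack in place, bdev_error_inject_error can later fail I/O at the EE_ layer while the raid only ever sees plain BaseBdevN devices.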
00:15:17.135 [2024-07-14 21:14:28.512957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:17.135 [2024-07-14 21:14:28.513895] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.135 [2024-07-14 21:14:28.513909] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:17.394 21:14:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:17.394 21:14:28 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:15:17.394 21:14:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:17.394 21:14:28 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:15:17.652 BaseBdev1_malloc 00:15:17.652 21:14:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:15:17.911 true 00:15:17.911 21:14:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:15:18.171 [2024-07-14 21:14:29.649886] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:15:18.171 [2024-07-14 21:14:29.649960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.171 [2024-07-14 21:14:29.650001] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe5872e34780 00:15:18.171 [2024-07-14 21:14:29.650009] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.171 [2024-07-14 21:14:29.650759] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.171 [2024-07-14 21:14:29.650782] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:15:18.171 BaseBdev1 00:15:18.171 21:14:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:18.171 21:14:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:15:18.430 BaseBdev2_malloc 00:15:18.430 21:14:29 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:15:18.688 true 00:15:18.688 21:14:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:15:18.947 [2024-07-14 21:14:30.277904] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:15:18.947 [2024-07-14 21:14:30.277966] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:18.947 [2024-07-14 21:14:30.278008] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe5872e34c80 00:15:18.947 [2024-07-14 21:14:30.278017] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:18.947 [2024-07-14 21:14:30.278790] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:18.947 [2024-07-14 21:14:30.278829] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: 
created pt_bdev for: BaseBdev2 00:15:18.947 BaseBdev2 00:15:18.947 21:14:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:18.947 21:14:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:15:19.206 BaseBdev3_malloc 00:15:19.206 21:14:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:15:19.206 true 00:15:19.206 21:14:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:15:19.464 [2024-07-14 21:14:30.949919] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:15:19.464 [2024-07-14 21:14:30.949978] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:19.464 [2024-07-14 21:14:30.950018] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe5872e35180 00:15:19.464 [2024-07-14 21:14:30.950026] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:19.464 [2024-07-14 21:14:30.950730] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:19.464 [2024-07-14 21:14:30.950784] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:15:19.464 BaseBdev3 00:15:19.464 21:14:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:15:19.464 21:14:30 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:15:19.723 BaseBdev4_malloc 00:15:19.723 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:15:19.982 true 00:15:19.982 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:15:20.241 [2024-07-14 21:14:31.629946] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:15:20.241 [2024-07-14 21:14:31.630022] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:15:20.241 [2024-07-14 21:14:31.630064] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe5872e35680 00:15:20.241 [2024-07-14 21:14:31.630072] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:15:20.241 [2024-07-14 21:14:31.630921] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:15:20.241 [2024-07-14 21:14:31.630978] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:15:20.241 BaseBdev4 00:15:20.241 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -z 64 -r concat -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:15:20.500 [2024-07-14 21:14:31.841980] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:20.500 [2024-07-14 21:14:31.842626] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:20.500 [2024-07-14 21:14:31.842650] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:20.500 [2024-07-14 21:14:31.842665] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:20.500 [2024-07-14 21:14:31.842736] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xe5872e35900 00:15:20.500 [2024-07-14 21:14:31.842742] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 253952, blocklen 512 00:15:20.500 [2024-07-14 21:14:31.842799] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe5872ea0e20 00:15:20.500 [2024-07-14 21:14:31.842919] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xe5872e35900 00:15:20.500 [2024-07-14 21:14:31.842924] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xe5872e35900 00:15:20.500 [2024-07-14 21:14:31.842951] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:20.500 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:20.500 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:20.500 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:20.500 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:20.500 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:20.500 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:20.500 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:20.500 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:20.500 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:20.500 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:20.500 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:20.500 21:14:31 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:20.759 21:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:20.759 "name": "raid_bdev1", 00:15:20.759 "uuid": "101ecf16-4226-11ef-aa83-81fbc7dfef58", 00:15:20.759 "strip_size_kb": 64, 00:15:20.759 "state": "online", 00:15:20.759 "raid_level": "concat", 00:15:20.759 "superblock": true, 00:15:20.759 "num_base_bdevs": 4, 00:15:20.759 "num_base_bdevs_discovered": 4, 00:15:20.759 "num_base_bdevs_operational": 4, 00:15:20.759 "base_bdevs_list": [ 00:15:20.759 { 00:15:20.759 "name": "BaseBdev1", 00:15:20.759 "uuid": "eb6fb4ed-05f2-9757-b980-e0157e568b37", 00:15:20.759 "is_configured": true, 00:15:20.759 "data_offset": 2048, 00:15:20.759 "data_size": 63488 00:15:20.759 }, 00:15:20.759 { 00:15:20.759 "name": "BaseBdev2", 00:15:20.759 "uuid": "c2844067-4646-ad50-971c-f91ae700ed8b", 00:15:20.759 "is_configured": true, 00:15:20.759 "data_offset": 2048, 00:15:20.759 "data_size": 63488 00:15:20.759 }, 00:15:20.759 { 00:15:20.759 "name": "BaseBdev3", 00:15:20.759 "uuid": 
"0fe1ff5f-173e-0553-8b1d-747398e9916f", 00:15:20.759 "is_configured": true, 00:15:20.759 "data_offset": 2048, 00:15:20.759 "data_size": 63488 00:15:20.759 }, 00:15:20.759 { 00:15:20.759 "name": "BaseBdev4", 00:15:20.759 "uuid": "ca480a20-0c94-495f-a381-7de7bcd2e668", 00:15:20.759 "is_configured": true, 00:15:20.759 "data_offset": 2048, 00:15:20.759 "data_size": 63488 00:15:20.759 } 00:15:20.760 ] 00:15:20.760 }' 00:15:20.760 21:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:20.760 21:14:32 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:21.019 21:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:15:21.019 21:14:32 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:15:21.019 [2024-07-14 21:14:32.446186] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xe5872ea0ec0 00:15:21.954 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ concat = \r\a\i\d\1 ]] 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online concat 64 4 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=concat 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=64 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:22.212 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:15:22.469 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:22.469 "name": "raid_bdev1", 00:15:22.469 "uuid": "101ecf16-4226-11ef-aa83-81fbc7dfef58", 00:15:22.469 "strip_size_kb": 64, 00:15:22.469 "state": "online", 00:15:22.469 "raid_level": "concat", 00:15:22.469 "superblock": true, 00:15:22.469 "num_base_bdevs": 4, 00:15:22.469 "num_base_bdevs_discovered": 4, 00:15:22.469 "num_base_bdevs_operational": 4, 00:15:22.469 "base_bdevs_list": [ 00:15:22.469 { 00:15:22.469 "name": "BaseBdev1", 00:15:22.469 "uuid": 
"eb6fb4ed-05f2-9757-b980-e0157e568b37", 00:15:22.469 "is_configured": true, 00:15:22.469 "data_offset": 2048, 00:15:22.469 "data_size": 63488 00:15:22.469 }, 00:15:22.469 { 00:15:22.469 "name": "BaseBdev2", 00:15:22.469 "uuid": "c2844067-4646-ad50-971c-f91ae700ed8b", 00:15:22.469 "is_configured": true, 00:15:22.469 "data_offset": 2048, 00:15:22.469 "data_size": 63488 00:15:22.469 }, 00:15:22.469 { 00:15:22.469 "name": "BaseBdev3", 00:15:22.469 "uuid": "0fe1ff5f-173e-0553-8b1d-747398e9916f", 00:15:22.469 "is_configured": true, 00:15:22.469 "data_offset": 2048, 00:15:22.469 "data_size": 63488 00:15:22.469 }, 00:15:22.469 { 00:15:22.469 "name": "BaseBdev4", 00:15:22.469 "uuid": "ca480a20-0c94-495f-a381-7de7bcd2e668", 00:15:22.469 "is_configured": true, 00:15:22.469 "data_offset": 2048, 00:15:22.469 "data_size": 63488 00:15:22.469 } 00:15:22.469 ] 00:15:22.469 }' 00:15:22.469 21:14:33 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:22.469 21:14:33 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:22.727 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:15:22.986 [2024-07-14 21:14:34.419779] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:15:22.986 [2024-07-14 21:14:34.419805] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:22.986 [2024-07-14 21:14:34.420143] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:22.986 [2024-07-14 21:14:34.420153] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:22.986 [2024-07-14 21:14:34.420161] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:22.986 [2024-07-14 21:14:34.420165] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xe5872e35900 name raid_bdev1, state offline 00:15:22.986 0 00:15:22.986 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 62738 00:15:22.986 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 62738 ']' 00:15:22.986 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 62738 00:15:22.986 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:15:22.986 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:22.986 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 62738 00:15:22.986 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:15:22.986 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:15:22.986 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:15:22.986 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62738' 00:15:22.986 killing process with pid 62738 00:15:22.986 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 62738 00:15:22.986 [2024-07-14 21:14:34.449189] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:22.986 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 62738 00:15:22.986 [2024-07-14 
21:14:34.472427] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:23.245 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.uJHgcZc1i7 00:15:23.245 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:15:23.245 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:15:23.245 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.51 00:15:23.245 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy concat 00:15:23.245 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:23.245 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@215 -- # return 1 00:15:23.245 21:14:34 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@847 -- # [[ 0.51 != \0\.\0\0 ]] 00:15:23.245 00:15:23.245 real 0m6.787s 00:15:23.245 user 0m10.665s 00:15:23.245 sys 0m1.130s 00:15:23.245 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:23.245 21:14:34 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.245 ************************************ 00:15:23.245 END TEST raid_write_error_test 00:15:23.245 ************************************ 00:15:23.245 21:14:34 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:23.245 21:14:34 bdev_raid -- bdev/bdev_raid.sh@866 -- # for level in raid0 concat raid1 00:15:23.245 21:14:34 bdev_raid -- bdev/bdev_raid.sh@867 -- # run_test raid_state_function_test raid_state_function_test raid1 4 false 00:15:23.245 21:14:34 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:23.245 21:14:34 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:23.245 21:14:34 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:23.245 ************************************ 00:15:23.245 START TEST raid_state_function_test 00:15:23.245 ************************************ 00:15:23.245 21:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 4 false 00:15:23.245 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:23.245 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:23.245 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@222 -- # local superblock=false 00:15:23.245 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:23.245 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:23.245 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:23.245 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 
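[aside] Both error tests above conclude the same way: the failure rate bdevperf logged for raid_bdev1 is scraped out of the log file, and because concat carries no redundancy the test passes only if the injected errors actually surfaced. A condensed sketch of the bdev_raid.sh@843-847 logic visible in the trace:

  fail_per_s=$(grep raid_bdev1 "$bdevperf_log" | grep -v Job | awk '{print $6}')
  if has_redundancy "$raid_level"; then
      [[ $fail_per_s == "0.00" ]]   # redundant levels (e.g. raid1) must mask the errors
  else
      [[ $fail_per_s != "0.00" ]]   # concat/raid0 must propagate them: 0.53 and 0.51 above
  fi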
00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@237 -- # '[' false = true ']' 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@240 -- # superblock_create_arg= 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@244 -- # raid_pid=62874 00:15:23.246 Process raid pid: 62874 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 62874' 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@246 -- # waitforlisten 62874 /var/tmp/spdk-raid.sock 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@829 -- # '[' -z 62874 ']' 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:23.246 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:23.246 21:14:34 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:23.246 [2024-07-14 21:14:34.723137] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
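[aside] Unlike the error tests, raid_state_function_test drives a bare bdev_svc app over RPC rather than bdevperf, and its first step deliberately creates the raid before any base bdev exists — which is why the "Currently unable to find bdev" and "doesn't exist now" notices below are expected. A sketch of that sequence, with the outcomes the JSON dumps confirm:

  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # the raid registers immediately but reports:
  #   "state": "configuring", "num_base_bdevs_discovered": 0
  scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1
  # the raid claims BaseBdev1 on arrival: num_base_bdevs_discovered becomes 1,
  # state stays "configuring" until all four bases exist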
00:15:23.246 [2024-07-14 21:14:34.723390] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:23.813 EAL: TSC is not safe to use in SMP mode 00:15:23.813 EAL: TSC is not invariant 00:15:23.813 [2024-07-14 21:14:35.244416] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.813 [2024-07-14 21:14:35.330448] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:23.813 [2024-07-14 21:14:35.332843] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:23.813 [2024-07-14 21:14:35.333674] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:23.813 [2024-07-14 21:14:35.333689] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:24.380 21:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:24.380 21:14:35 bdev_raid.raid_state_function_test -- common/autotest_common.sh@862 -- # return 0 00:15:24.380 21:14:35 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:24.640 [2024-07-14 21:14:36.061857] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:24.640 [2024-07-14 21:14:36.061922] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:24.640 [2024-07-14 21:14:36.061927] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:24.640 [2024-07-14 21:14:36.061950] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:24.640 [2024-07-14 21:14:36.061954] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:24.640 [2024-07-14 21:14:36.061960] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:24.640 [2024-07-14 21:14:36.061963] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:24.640 [2024-07-14 21:14:36.061969] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:24.640 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:24.640 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:24.640 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:24.640 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:24.640 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:24.640 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:24.640 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:24.640 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:24.640 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:24.640 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:24.640 21:14:36 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:24.640 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:24.898 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:24.898 "name": "Existed_Raid", 00:15:24.898 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.898 "strip_size_kb": 0, 00:15:24.898 "state": "configuring", 00:15:24.898 "raid_level": "raid1", 00:15:24.898 "superblock": false, 00:15:24.898 "num_base_bdevs": 4, 00:15:24.898 "num_base_bdevs_discovered": 0, 00:15:24.899 "num_base_bdevs_operational": 4, 00:15:24.899 "base_bdevs_list": [ 00:15:24.899 { 00:15:24.899 "name": "BaseBdev1", 00:15:24.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.899 "is_configured": false, 00:15:24.899 "data_offset": 0, 00:15:24.899 "data_size": 0 00:15:24.899 }, 00:15:24.899 { 00:15:24.899 "name": "BaseBdev2", 00:15:24.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.899 "is_configured": false, 00:15:24.899 "data_offset": 0, 00:15:24.899 "data_size": 0 00:15:24.899 }, 00:15:24.899 { 00:15:24.899 "name": "BaseBdev3", 00:15:24.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.899 "is_configured": false, 00:15:24.899 "data_offset": 0, 00:15:24.899 "data_size": 0 00:15:24.899 }, 00:15:24.899 { 00:15:24.899 "name": "BaseBdev4", 00:15:24.899 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:24.899 "is_configured": false, 00:15:24.899 "data_offset": 0, 00:15:24.899 "data_size": 0 00:15:24.899 } 00:15:24.899 ] 00:15:24.899 }' 00:15:24.899 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:24.899 21:14:36 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:25.157 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:25.416 [2024-07-14 21:14:36.853881] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:25.416 [2024-07-14 21:14:36.853904] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x31ea12234500 name Existed_Raid, state configuring 00:15:25.416 21:14:36 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:25.675 [2024-07-14 21:14:37.065889] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:25.675 [2024-07-14 21:14:37.065941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:25.675 [2024-07-14 21:14:37.065946] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:25.675 [2024-07-14 21:14:37.065968] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:25.675 [2024-07-14 21:14:37.065971] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:25.675 [2024-07-14 21:14:37.065977] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:25.675 [2024-07-14 21:14:37.065980] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:25.675 
[2024-07-14 21:14:37.065986] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:25.675 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:25.933 [2024-07-14 21:14:37.326846] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:25.933 BaseBdev1 00:15:25.933 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:25.933 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:25.933 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:25.933 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:25.933 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:25.933 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:25.933 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:26.239 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:26.497 [ 00:15:26.497 { 00:15:26.497 "name": "BaseBdev1", 00:15:26.497 "aliases": [ 00:15:26.497 "13639869-4226-11ef-aa83-81fbc7dfef58" 00:15:26.497 ], 00:15:26.497 "product_name": "Malloc disk", 00:15:26.497 "block_size": 512, 00:15:26.497 "num_blocks": 65536, 00:15:26.497 "uuid": "13639869-4226-11ef-aa83-81fbc7dfef58", 00:15:26.497 "assigned_rate_limits": { 00:15:26.497 "rw_ios_per_sec": 0, 00:15:26.497 "rw_mbytes_per_sec": 0, 00:15:26.497 "r_mbytes_per_sec": 0, 00:15:26.497 "w_mbytes_per_sec": 0 00:15:26.497 }, 00:15:26.497 "claimed": true, 00:15:26.497 "claim_type": "exclusive_write", 00:15:26.497 "zoned": false, 00:15:26.497 "supported_io_types": { 00:15:26.497 "read": true, 00:15:26.497 "write": true, 00:15:26.497 "unmap": true, 00:15:26.497 "flush": true, 00:15:26.497 "reset": true, 00:15:26.497 "nvme_admin": false, 00:15:26.497 "nvme_io": false, 00:15:26.497 "nvme_io_md": false, 00:15:26.497 "write_zeroes": true, 00:15:26.497 "zcopy": true, 00:15:26.497 "get_zone_info": false, 00:15:26.497 "zone_management": false, 00:15:26.497 "zone_append": false, 00:15:26.497 "compare": false, 00:15:26.497 "compare_and_write": false, 00:15:26.497 "abort": true, 00:15:26.497 "seek_hole": false, 00:15:26.497 "seek_data": false, 00:15:26.497 "copy": true, 00:15:26.497 "nvme_iov_md": false 00:15:26.497 }, 00:15:26.497 "memory_domains": [ 00:15:26.497 { 00:15:26.497 "dma_device_id": "system", 00:15:26.497 "dma_device_type": 1 00:15:26.497 }, 00:15:26.497 { 00:15:26.497 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:26.497 "dma_device_type": 2 00:15:26.497 } 00:15:26.497 ], 00:15:26.497 "driver_specific": {} 00:15:26.497 } 00:15:26.497 ] 00:15:26.497 21:14:37 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:26.497 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:26.497 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 
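[aside] The verify_raid_bdev_state call traced here reduces to one RPC plus a jq filter over its output; a simplified sketch of the bdev_raid.sh@116-128 helper (the real one also asserts strip size and the discovered/operational bdev counts in the same pattern):

  verify_raid_bdev_state() {
      local name=$1 expected_state=$2 expected_level=$3
      local info
      info=$(scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all |
          jq -r ".[] | select(.name == \"$name\")")
      [[ $(jq -r '.state' <<<"$info") == "$expected_state" ]] &&
          [[ $(jq -r '.raid_level' <<<"$info") == "$expected_level" ]]
  }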
00:15:26.497 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:26.497 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:26.497 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:26.497 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:26.497 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:26.497 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:26.497 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:26.497 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:26.497 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:26.497 21:14:37 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:26.755 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:26.755 "name": "Existed_Raid", 00:15:26.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.755 "strip_size_kb": 0, 00:15:26.755 "state": "configuring", 00:15:26.755 "raid_level": "raid1", 00:15:26.755 "superblock": false, 00:15:26.755 "num_base_bdevs": 4, 00:15:26.755 "num_base_bdevs_discovered": 1, 00:15:26.755 "num_base_bdevs_operational": 4, 00:15:26.755 "base_bdevs_list": [ 00:15:26.755 { 00:15:26.755 "name": "BaseBdev1", 00:15:26.755 "uuid": "13639869-4226-11ef-aa83-81fbc7dfef58", 00:15:26.755 "is_configured": true, 00:15:26.755 "data_offset": 0, 00:15:26.755 "data_size": 65536 00:15:26.755 }, 00:15:26.755 { 00:15:26.755 "name": "BaseBdev2", 00:15:26.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.755 "is_configured": false, 00:15:26.755 "data_offset": 0, 00:15:26.755 "data_size": 0 00:15:26.755 }, 00:15:26.755 { 00:15:26.755 "name": "BaseBdev3", 00:15:26.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.755 "is_configured": false, 00:15:26.755 "data_offset": 0, 00:15:26.755 "data_size": 0 00:15:26.755 }, 00:15:26.755 { 00:15:26.755 "name": "BaseBdev4", 00:15:26.755 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:26.755 "is_configured": false, 00:15:26.755 "data_offset": 0, 00:15:26.755 "data_size": 0 00:15:26.755 } 00:15:26.755 ] 00:15:26.755 }' 00:15:26.755 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:26.755 21:14:38 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.013 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:27.271 [2024-07-14 21:14:38.565949] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:27.271 [2024-07-14 21:14:38.565992] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x31ea12234500 name Existed_Raid, state configuring 00:15:27.271 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:27.271 
[2024-07-14 21:14:38.773976] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:27.271 [2024-07-14 21:14:38.774839] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:27.271 [2024-07-14 21:14:38.774872] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:27.271 [2024-07-14 21:14:38.774877] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:27.271 [2024-07-14 21:14:38.774901] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:27.271 [2024-07-14 21:14:38.774918] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:27.271 [2024-07-14 21:14:38.774941] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:27.271 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:27.271 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:27.271 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:27.271 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:27.271 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:27.271 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:27.271 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:27.271 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:27.271 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:27.271 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:27.271 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:27.271 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:27.272 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:27.272 21:14:38 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:27.530 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:27.530 "name": "Existed_Raid", 00:15:27.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.530 "strip_size_kb": 0, 00:15:27.530 "state": "configuring", 00:15:27.530 "raid_level": "raid1", 00:15:27.530 "superblock": false, 00:15:27.530 "num_base_bdevs": 4, 00:15:27.530 "num_base_bdevs_discovered": 1, 00:15:27.530 "num_base_bdevs_operational": 4, 00:15:27.530 "base_bdevs_list": [ 00:15:27.530 { 00:15:27.530 "name": "BaseBdev1", 00:15:27.530 "uuid": "13639869-4226-11ef-aa83-81fbc7dfef58", 00:15:27.530 "is_configured": true, 00:15:27.530 "data_offset": 0, 00:15:27.530 "data_size": 65536 00:15:27.530 }, 00:15:27.530 { 00:15:27.530 "name": "BaseBdev2", 00:15:27.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.530 "is_configured": false, 00:15:27.530 "data_offset": 0, 00:15:27.530 "data_size": 0 00:15:27.530 }, 00:15:27.530 { 
00:15:27.530 "name": "BaseBdev3", 00:15:27.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.530 "is_configured": false, 00:15:27.530 "data_offset": 0, 00:15:27.530 "data_size": 0 00:15:27.530 }, 00:15:27.530 { 00:15:27.530 "name": "BaseBdev4", 00:15:27.530 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:27.530 "is_configured": false, 00:15:27.530 "data_offset": 0, 00:15:27.530 "data_size": 0 00:15:27.530 } 00:15:27.530 ] 00:15:27.530 }' 00:15:27.530 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:27.530 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:27.788 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:28.046 [2024-07-14 21:14:39.522131] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:28.046 BaseBdev2 00:15:28.046 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:28.046 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:28.046 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:28.046 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:28.046 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:28.046 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:28.046 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:28.304 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:28.562 [ 00:15:28.562 { 00:15:28.562 "name": "BaseBdev2", 00:15:28.562 "aliases": [ 00:15:28.562 "14b2b068-4226-11ef-aa83-81fbc7dfef58" 00:15:28.562 ], 00:15:28.562 "product_name": "Malloc disk", 00:15:28.562 "block_size": 512, 00:15:28.562 "num_blocks": 65536, 00:15:28.562 "uuid": "14b2b068-4226-11ef-aa83-81fbc7dfef58", 00:15:28.562 "assigned_rate_limits": { 00:15:28.562 "rw_ios_per_sec": 0, 00:15:28.562 "rw_mbytes_per_sec": 0, 00:15:28.562 "r_mbytes_per_sec": 0, 00:15:28.562 "w_mbytes_per_sec": 0 00:15:28.562 }, 00:15:28.562 "claimed": true, 00:15:28.562 "claim_type": "exclusive_write", 00:15:28.562 "zoned": false, 00:15:28.562 "supported_io_types": { 00:15:28.562 "read": true, 00:15:28.562 "write": true, 00:15:28.562 "unmap": true, 00:15:28.562 "flush": true, 00:15:28.562 "reset": true, 00:15:28.562 "nvme_admin": false, 00:15:28.562 "nvme_io": false, 00:15:28.562 "nvme_io_md": false, 00:15:28.562 "write_zeroes": true, 00:15:28.562 "zcopy": true, 00:15:28.562 "get_zone_info": false, 00:15:28.562 "zone_management": false, 00:15:28.562 "zone_append": false, 00:15:28.562 "compare": false, 00:15:28.562 "compare_and_write": false, 00:15:28.562 "abort": true, 00:15:28.562 "seek_hole": false, 00:15:28.562 "seek_data": false, 00:15:28.562 "copy": true, 00:15:28.562 "nvme_iov_md": false 00:15:28.562 }, 00:15:28.562 "memory_domains": [ 00:15:28.562 { 00:15:28.562 "dma_device_id": "system", 00:15:28.562 "dma_device_type": 1 00:15:28.562 }, 00:15:28.562 { 00:15:28.562 
"dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:28.562 "dma_device_type": 2 00:15:28.562 } 00:15:28.562 ], 00:15:28.562 "driver_specific": {} 00:15:28.562 } 00:15:28.562 ] 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:28.562 21:14:39 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:28.820 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:28.820 "name": "Existed_Raid", 00:15:28.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.820 "strip_size_kb": 0, 00:15:28.820 "state": "configuring", 00:15:28.820 "raid_level": "raid1", 00:15:28.820 "superblock": false, 00:15:28.820 "num_base_bdevs": 4, 00:15:28.820 "num_base_bdevs_discovered": 2, 00:15:28.820 "num_base_bdevs_operational": 4, 00:15:28.820 "base_bdevs_list": [ 00:15:28.820 { 00:15:28.820 "name": "BaseBdev1", 00:15:28.820 "uuid": "13639869-4226-11ef-aa83-81fbc7dfef58", 00:15:28.820 "is_configured": true, 00:15:28.820 "data_offset": 0, 00:15:28.820 "data_size": 65536 00:15:28.820 }, 00:15:28.820 { 00:15:28.820 "name": "BaseBdev2", 00:15:28.820 "uuid": "14b2b068-4226-11ef-aa83-81fbc7dfef58", 00:15:28.820 "is_configured": true, 00:15:28.820 "data_offset": 0, 00:15:28.820 "data_size": 65536 00:15:28.820 }, 00:15:28.820 { 00:15:28.820 "name": "BaseBdev3", 00:15:28.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.820 "is_configured": false, 00:15:28.820 "data_offset": 0, 00:15:28.820 "data_size": 0 00:15:28.820 }, 00:15:28.820 { 00:15:28.820 "name": "BaseBdev4", 00:15:28.820 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:28.820 "is_configured": false, 00:15:28.820 "data_offset": 0, 00:15:28.820 "data_size": 0 00:15:28.820 } 00:15:28.820 ] 00:15:28.820 }' 00:15:28.820 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:28.820 21:14:40 bdev_raid.raid_state_function_test -- 
common/autotest_common.sh@10 -- # set +x 00:15:29.078 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:29.337 [2024-07-14 21:14:40.726165] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:29.337 BaseBdev3 00:15:29.337 21:14:40 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:29.337 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:29.337 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:29.337 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:29.337 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:29.337 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:29.337 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:29.595 21:14:40 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:29.854 [ 00:15:29.854 { 00:15:29.854 "name": "BaseBdev3", 00:15:29.854 "aliases": [ 00:15:29.854 "156a695f-4226-11ef-aa83-81fbc7dfef58" 00:15:29.854 ], 00:15:29.854 "product_name": "Malloc disk", 00:15:29.854 "block_size": 512, 00:15:29.854 "num_blocks": 65536, 00:15:29.854 "uuid": "156a695f-4226-11ef-aa83-81fbc7dfef58", 00:15:29.854 "assigned_rate_limits": { 00:15:29.854 "rw_ios_per_sec": 0, 00:15:29.854 "rw_mbytes_per_sec": 0, 00:15:29.854 "r_mbytes_per_sec": 0, 00:15:29.854 "w_mbytes_per_sec": 0 00:15:29.854 }, 00:15:29.854 "claimed": true, 00:15:29.854 "claim_type": "exclusive_write", 00:15:29.854 "zoned": false, 00:15:29.854 "supported_io_types": { 00:15:29.854 "read": true, 00:15:29.854 "write": true, 00:15:29.854 "unmap": true, 00:15:29.854 "flush": true, 00:15:29.854 "reset": true, 00:15:29.854 "nvme_admin": false, 00:15:29.854 "nvme_io": false, 00:15:29.854 "nvme_io_md": false, 00:15:29.854 "write_zeroes": true, 00:15:29.854 "zcopy": true, 00:15:29.854 "get_zone_info": false, 00:15:29.854 "zone_management": false, 00:15:29.854 "zone_append": false, 00:15:29.854 "compare": false, 00:15:29.854 "compare_and_write": false, 00:15:29.854 "abort": true, 00:15:29.854 "seek_hole": false, 00:15:29.854 "seek_data": false, 00:15:29.854 "copy": true, 00:15:29.854 "nvme_iov_md": false 00:15:29.854 }, 00:15:29.854 "memory_domains": [ 00:15:29.854 { 00:15:29.854 "dma_device_id": "system", 00:15:29.854 "dma_device_type": 1 00:15:29.854 }, 00:15:29.854 { 00:15:29.854 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:29.854 "dma_device_type": 2 00:15:29.854 } 00:15:29.854 ], 00:15:29.854 "driver_specific": {} 00:15:29.854 } 00:15:29.854 ] 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring 
raid1 0 4 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:29.854 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:30.113 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:30.113 "name": "Existed_Raid", 00:15:30.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.113 "strip_size_kb": 0, 00:15:30.113 "state": "configuring", 00:15:30.113 "raid_level": "raid1", 00:15:30.113 "superblock": false, 00:15:30.113 "num_base_bdevs": 4, 00:15:30.113 "num_base_bdevs_discovered": 3, 00:15:30.113 "num_base_bdevs_operational": 4, 00:15:30.113 "base_bdevs_list": [ 00:15:30.113 { 00:15:30.113 "name": "BaseBdev1", 00:15:30.113 "uuid": "13639869-4226-11ef-aa83-81fbc7dfef58", 00:15:30.113 "is_configured": true, 00:15:30.113 "data_offset": 0, 00:15:30.113 "data_size": 65536 00:15:30.113 }, 00:15:30.113 { 00:15:30.113 "name": "BaseBdev2", 00:15:30.113 "uuid": "14b2b068-4226-11ef-aa83-81fbc7dfef58", 00:15:30.113 "is_configured": true, 00:15:30.113 "data_offset": 0, 00:15:30.113 "data_size": 65536 00:15:30.113 }, 00:15:30.113 { 00:15:30.113 "name": "BaseBdev3", 00:15:30.113 "uuid": "156a695f-4226-11ef-aa83-81fbc7dfef58", 00:15:30.113 "is_configured": true, 00:15:30.113 "data_offset": 0, 00:15:30.113 "data_size": 65536 00:15:30.113 }, 00:15:30.113 { 00:15:30.113 "name": "BaseBdev4", 00:15:30.113 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:30.113 "is_configured": false, 00:15:30.113 "data_offset": 0, 00:15:30.113 "data_size": 0 00:15:30.113 } 00:15:30.113 ] 00:15:30.113 }' 00:15:30.113 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:30.113 21:14:41 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:30.371 21:14:41 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:30.630 [2024-07-14 21:14:42.014184] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:30.630 [2024-07-14 21:14:42.014207] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x31ea12234a00 00:15:30.630 [2024-07-14 21:14:42.014228] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:30.630 
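
Once BaseBdev4 is claimed below, raid_bdev_configure_cont runs and the array flips from "configuring" to "online". Note the configured volume reports blockcnt 65536 with blocklen 512, the capacity of a single 32 MiB member, since raid1 mirrors rather than concatenates. A sketch of how the transition can be observed (the jq filter mirrors the one the harness uses; selecting .state is an added convenience):

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_raid_get_bdevs all \
        | jq -r '.[] | select(.name == "Existed_Raid").state'   # expected: online
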
[2024-07-14 21:14:42.014253] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x31ea12297e20 00:15:30.630 [2024-07-14 21:14:42.014338] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x31ea12234a00 00:15:30.630 [2024-07-14 21:14:42.014342] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x31ea12234a00 00:15:30.630 [2024-07-14 21:14:42.014371] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:30.630 BaseBdev4 00:15:30.630 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:30.630 21:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:30.630 21:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:30.630 21:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:30.630 21:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:30.630 21:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:30.630 21:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:30.889 21:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:31.147 [ 00:15:31.147 { 00:15:31.147 "name": "BaseBdev4", 00:15:31.147 "aliases": [ 00:15:31.147 "162ef2d5-4226-11ef-aa83-81fbc7dfef58" 00:15:31.147 ], 00:15:31.147 "product_name": "Malloc disk", 00:15:31.147 "block_size": 512, 00:15:31.147 "num_blocks": 65536, 00:15:31.147 "uuid": "162ef2d5-4226-11ef-aa83-81fbc7dfef58", 00:15:31.147 "assigned_rate_limits": { 00:15:31.147 "rw_ios_per_sec": 0, 00:15:31.147 "rw_mbytes_per_sec": 0, 00:15:31.147 "r_mbytes_per_sec": 0, 00:15:31.147 "w_mbytes_per_sec": 0 00:15:31.147 }, 00:15:31.147 "claimed": true, 00:15:31.147 "claim_type": "exclusive_write", 00:15:31.147 "zoned": false, 00:15:31.147 "supported_io_types": { 00:15:31.147 "read": true, 00:15:31.147 "write": true, 00:15:31.147 "unmap": true, 00:15:31.147 "flush": true, 00:15:31.147 "reset": true, 00:15:31.147 "nvme_admin": false, 00:15:31.147 "nvme_io": false, 00:15:31.147 "nvme_io_md": false, 00:15:31.147 "write_zeroes": true, 00:15:31.147 "zcopy": true, 00:15:31.147 "get_zone_info": false, 00:15:31.147 "zone_management": false, 00:15:31.147 "zone_append": false, 00:15:31.147 "compare": false, 00:15:31.147 "compare_and_write": false, 00:15:31.147 "abort": true, 00:15:31.147 "seek_hole": false, 00:15:31.147 "seek_data": false, 00:15:31.147 "copy": true, 00:15:31.147 "nvme_iov_md": false 00:15:31.147 }, 00:15:31.147 "memory_domains": [ 00:15:31.147 { 00:15:31.147 "dma_device_id": "system", 00:15:31.147 "dma_device_type": 1 00:15:31.147 }, 00:15:31.147 { 00:15:31.147 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.147 "dma_device_type": 2 00:15:31.147 } 00:15:31.147 ], 00:15:31.147 "driver_specific": {} 00:15:31.147 } 00:15:31.147 ] 00:15:31.147 21:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@265 -- # (( i < 
num_base_bdevs )) 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:31.148 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:31.406 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:31.406 "name": "Existed_Raid", 00:15:31.406 "uuid": "162ef92f-4226-11ef-aa83-81fbc7dfef58", 00:15:31.406 "strip_size_kb": 0, 00:15:31.406 "state": "online", 00:15:31.406 "raid_level": "raid1", 00:15:31.406 "superblock": false, 00:15:31.406 "num_base_bdevs": 4, 00:15:31.406 "num_base_bdevs_discovered": 4, 00:15:31.406 "num_base_bdevs_operational": 4, 00:15:31.406 "base_bdevs_list": [ 00:15:31.406 { 00:15:31.406 "name": "BaseBdev1", 00:15:31.406 "uuid": "13639869-4226-11ef-aa83-81fbc7dfef58", 00:15:31.406 "is_configured": true, 00:15:31.406 "data_offset": 0, 00:15:31.406 "data_size": 65536 00:15:31.406 }, 00:15:31.406 { 00:15:31.406 "name": "BaseBdev2", 00:15:31.406 "uuid": "14b2b068-4226-11ef-aa83-81fbc7dfef58", 00:15:31.407 "is_configured": true, 00:15:31.407 "data_offset": 0, 00:15:31.407 "data_size": 65536 00:15:31.407 }, 00:15:31.407 { 00:15:31.407 "name": "BaseBdev3", 00:15:31.407 "uuid": "156a695f-4226-11ef-aa83-81fbc7dfef58", 00:15:31.407 "is_configured": true, 00:15:31.407 "data_offset": 0, 00:15:31.407 "data_size": 65536 00:15:31.407 }, 00:15:31.407 { 00:15:31.407 "name": "BaseBdev4", 00:15:31.407 "uuid": "162ef2d5-4226-11ef-aa83-81fbc7dfef58", 00:15:31.407 "is_configured": true, 00:15:31.407 "data_offset": 0, 00:15:31.407 "data_size": 65536 00:15:31.407 } 00:15:31.407 ] 00:15:31.407 }' 00:15:31.407 21:14:42 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:31.407 21:14:42 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:31.665 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:31.665 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:31.665 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:31.665 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # 
local base_bdev_info 00:15:31.665 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:31.665 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:31.665 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:31.665 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:31.923 [2024-07-14 21:14:43.278131] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:31.923 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:31.923 "name": "Existed_Raid", 00:15:31.923 "aliases": [ 00:15:31.923 "162ef92f-4226-11ef-aa83-81fbc7dfef58" 00:15:31.923 ], 00:15:31.923 "product_name": "Raid Volume", 00:15:31.923 "block_size": 512, 00:15:31.923 "num_blocks": 65536, 00:15:31.923 "uuid": "162ef92f-4226-11ef-aa83-81fbc7dfef58", 00:15:31.923 "assigned_rate_limits": { 00:15:31.923 "rw_ios_per_sec": 0, 00:15:31.923 "rw_mbytes_per_sec": 0, 00:15:31.923 "r_mbytes_per_sec": 0, 00:15:31.923 "w_mbytes_per_sec": 0 00:15:31.923 }, 00:15:31.923 "claimed": false, 00:15:31.923 "zoned": false, 00:15:31.923 "supported_io_types": { 00:15:31.923 "read": true, 00:15:31.923 "write": true, 00:15:31.923 "unmap": false, 00:15:31.923 "flush": false, 00:15:31.923 "reset": true, 00:15:31.923 "nvme_admin": false, 00:15:31.923 "nvme_io": false, 00:15:31.923 "nvme_io_md": false, 00:15:31.923 "write_zeroes": true, 00:15:31.923 "zcopy": false, 00:15:31.923 "get_zone_info": false, 00:15:31.923 "zone_management": false, 00:15:31.923 "zone_append": false, 00:15:31.923 "compare": false, 00:15:31.923 "compare_and_write": false, 00:15:31.923 "abort": false, 00:15:31.923 "seek_hole": false, 00:15:31.923 "seek_data": false, 00:15:31.923 "copy": false, 00:15:31.923 "nvme_iov_md": false 00:15:31.923 }, 00:15:31.923 "memory_domains": [ 00:15:31.923 { 00:15:31.923 "dma_device_id": "system", 00:15:31.923 "dma_device_type": 1 00:15:31.923 }, 00:15:31.923 { 00:15:31.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.923 "dma_device_type": 2 00:15:31.923 }, 00:15:31.923 { 00:15:31.923 "dma_device_id": "system", 00:15:31.923 "dma_device_type": 1 00:15:31.923 }, 00:15:31.923 { 00:15:31.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.923 "dma_device_type": 2 00:15:31.923 }, 00:15:31.923 { 00:15:31.923 "dma_device_id": "system", 00:15:31.923 "dma_device_type": 1 00:15:31.923 }, 00:15:31.923 { 00:15:31.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.923 "dma_device_type": 2 00:15:31.923 }, 00:15:31.923 { 00:15:31.923 "dma_device_id": "system", 00:15:31.923 "dma_device_type": 1 00:15:31.923 }, 00:15:31.923 { 00:15:31.923 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:31.923 "dma_device_type": 2 00:15:31.923 } 00:15:31.923 ], 00:15:31.923 "driver_specific": { 00:15:31.923 "raid": { 00:15:31.923 "uuid": "162ef92f-4226-11ef-aa83-81fbc7dfef58", 00:15:31.923 "strip_size_kb": 0, 00:15:31.923 "state": "online", 00:15:31.923 "raid_level": "raid1", 00:15:31.923 "superblock": false, 00:15:31.923 "num_base_bdevs": 4, 00:15:31.923 "num_base_bdevs_discovered": 4, 00:15:31.923 "num_base_bdevs_operational": 4, 00:15:31.923 "base_bdevs_list": [ 00:15:31.923 { 00:15:31.923 "name": "BaseBdev1", 00:15:31.923 "uuid": "13639869-4226-11ef-aa83-81fbc7dfef58", 00:15:31.923 "is_configured": true, 00:15:31.923 "data_offset": 0, 00:15:31.923 
"data_size": 65536 00:15:31.923 }, 00:15:31.923 { 00:15:31.923 "name": "BaseBdev2", 00:15:31.923 "uuid": "14b2b068-4226-11ef-aa83-81fbc7dfef58", 00:15:31.923 "is_configured": true, 00:15:31.923 "data_offset": 0, 00:15:31.923 "data_size": 65536 00:15:31.923 }, 00:15:31.923 { 00:15:31.923 "name": "BaseBdev3", 00:15:31.923 "uuid": "156a695f-4226-11ef-aa83-81fbc7dfef58", 00:15:31.923 "is_configured": true, 00:15:31.923 "data_offset": 0, 00:15:31.923 "data_size": 65536 00:15:31.923 }, 00:15:31.923 { 00:15:31.923 "name": "BaseBdev4", 00:15:31.923 "uuid": "162ef2d5-4226-11ef-aa83-81fbc7dfef58", 00:15:31.923 "is_configured": true, 00:15:31.923 "data_offset": 0, 00:15:31.923 "data_size": 65536 00:15:31.923 } 00:15:31.923 ] 00:15:31.923 } 00:15:31.923 } 00:15:31.923 }' 00:15:31.923 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:31.923 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:31.923 BaseBdev2 00:15:31.923 BaseBdev3 00:15:31.923 BaseBdev4' 00:15:31.923 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:31.923 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:31.923 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.181 "name": "BaseBdev1", 00:15:32.181 "aliases": [ 00:15:32.181 "13639869-4226-11ef-aa83-81fbc7dfef58" 00:15:32.181 ], 00:15:32.181 "product_name": "Malloc disk", 00:15:32.181 "block_size": 512, 00:15:32.181 "num_blocks": 65536, 00:15:32.181 "uuid": "13639869-4226-11ef-aa83-81fbc7dfef58", 00:15:32.181 "assigned_rate_limits": { 00:15:32.181 "rw_ios_per_sec": 0, 00:15:32.181 "rw_mbytes_per_sec": 0, 00:15:32.181 "r_mbytes_per_sec": 0, 00:15:32.181 "w_mbytes_per_sec": 0 00:15:32.181 }, 00:15:32.181 "claimed": true, 00:15:32.181 "claim_type": "exclusive_write", 00:15:32.181 "zoned": false, 00:15:32.181 "supported_io_types": { 00:15:32.181 "read": true, 00:15:32.181 "write": true, 00:15:32.181 "unmap": true, 00:15:32.181 "flush": true, 00:15:32.181 "reset": true, 00:15:32.181 "nvme_admin": false, 00:15:32.181 "nvme_io": false, 00:15:32.181 "nvme_io_md": false, 00:15:32.181 "write_zeroes": true, 00:15:32.181 "zcopy": true, 00:15:32.181 "get_zone_info": false, 00:15:32.181 "zone_management": false, 00:15:32.181 "zone_append": false, 00:15:32.181 "compare": false, 00:15:32.181 "compare_and_write": false, 00:15:32.181 "abort": true, 00:15:32.181 "seek_hole": false, 00:15:32.181 "seek_data": false, 00:15:32.181 "copy": true, 00:15:32.181 "nvme_iov_md": false 00:15:32.181 }, 00:15:32.181 "memory_domains": [ 00:15:32.181 { 00:15:32.181 "dma_device_id": "system", 00:15:32.181 "dma_device_type": 1 00:15:32.181 }, 00:15:32.181 { 00:15:32.181 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.181 "dma_device_type": 2 00:15:32.181 } 00:15:32.181 ], 00:15:32.181 "driver_specific": {} 00:15:32.181 }' 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 
00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:32.181 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.440 "name": "BaseBdev2", 00:15:32.440 "aliases": [ 00:15:32.440 "14b2b068-4226-11ef-aa83-81fbc7dfef58" 00:15:32.440 ], 00:15:32.440 "product_name": "Malloc disk", 00:15:32.440 "block_size": 512, 00:15:32.440 "num_blocks": 65536, 00:15:32.440 "uuid": "14b2b068-4226-11ef-aa83-81fbc7dfef58", 00:15:32.440 "assigned_rate_limits": { 00:15:32.440 "rw_ios_per_sec": 0, 00:15:32.440 "rw_mbytes_per_sec": 0, 00:15:32.440 "r_mbytes_per_sec": 0, 00:15:32.440 "w_mbytes_per_sec": 0 00:15:32.440 }, 00:15:32.440 "claimed": true, 00:15:32.440 "claim_type": "exclusive_write", 00:15:32.440 "zoned": false, 00:15:32.440 "supported_io_types": { 00:15:32.440 "read": true, 00:15:32.440 "write": true, 00:15:32.440 "unmap": true, 00:15:32.440 "flush": true, 00:15:32.440 "reset": true, 00:15:32.440 "nvme_admin": false, 00:15:32.440 "nvme_io": false, 00:15:32.440 "nvme_io_md": false, 00:15:32.440 "write_zeroes": true, 00:15:32.440 "zcopy": true, 00:15:32.440 "get_zone_info": false, 00:15:32.440 "zone_management": false, 00:15:32.440 "zone_append": false, 00:15:32.440 "compare": false, 00:15:32.440 "compare_and_write": false, 00:15:32.440 "abort": true, 00:15:32.440 "seek_hole": false, 00:15:32.440 "seek_data": false, 00:15:32.440 "copy": true, 00:15:32.440 "nvme_iov_md": false 00:15:32.440 }, 00:15:32.440 "memory_domains": [ 00:15:32.440 { 00:15:32.440 "dma_device_id": "system", 00:15:32.440 "dma_device_type": 1 00:15:32.440 }, 00:15:32.440 { 00:15:32.440 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.440 "dma_device_type": 2 00:15:32.440 } 00:15:32.440 ], 00:15:32.440 "driver_specific": {} 00:15:32.440 }' 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
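
The removal step traced further below deletes BaseBdev1 out from under the online array. has_redundancy returns success for raid1, so the expected state stays online: the subsequent dump shows num_base_bdevs_discovered drop to 3 and the vacated slot reported with a null name. In isolation (command verbatim from the trace; comments are interpretation):

    # pull one mirror leg; a raid1 array must survive it
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock \
        bdev_malloc_delete BaseBdev1
    # expected afterwards: state "online", 3 of 4 base bdevs discovered,
    # and "name": null for the missing member in base_bdevs_list
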
00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:32.440 21:14:43 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:32.700 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:32.700 "name": "BaseBdev3", 00:15:32.700 "aliases": [ 00:15:32.700 "156a695f-4226-11ef-aa83-81fbc7dfef58" 00:15:32.700 ], 00:15:32.700 "product_name": "Malloc disk", 00:15:32.700 "block_size": 512, 00:15:32.700 "num_blocks": 65536, 00:15:32.700 "uuid": "156a695f-4226-11ef-aa83-81fbc7dfef58", 00:15:32.700 "assigned_rate_limits": { 00:15:32.700 "rw_ios_per_sec": 0, 00:15:32.700 "rw_mbytes_per_sec": 0, 00:15:32.700 "r_mbytes_per_sec": 0, 00:15:32.700 "w_mbytes_per_sec": 0 00:15:32.700 }, 00:15:32.700 "claimed": true, 00:15:32.700 "claim_type": "exclusive_write", 00:15:32.700 "zoned": false, 00:15:32.700 "supported_io_types": { 00:15:32.700 "read": true, 00:15:32.700 "write": true, 00:15:32.700 "unmap": true, 00:15:32.700 "flush": true, 00:15:32.700 "reset": true, 00:15:32.700 "nvme_admin": false, 00:15:32.700 "nvme_io": false, 00:15:32.700 "nvme_io_md": false, 00:15:32.700 "write_zeroes": true, 00:15:32.700 "zcopy": true, 00:15:32.700 "get_zone_info": false, 00:15:32.700 "zone_management": false, 00:15:32.700 "zone_append": false, 00:15:32.700 "compare": false, 00:15:32.700 "compare_and_write": false, 00:15:32.700 "abort": true, 00:15:32.700 "seek_hole": false, 00:15:32.700 "seek_data": false, 00:15:32.700 "copy": true, 00:15:32.700 "nvme_iov_md": false 00:15:32.700 }, 00:15:32.700 "memory_domains": [ 00:15:32.700 { 00:15:32.700 "dma_device_id": "system", 00:15:32.700 "dma_device_type": 1 00:15:32.700 }, 00:15:32.700 { 00:15:32.700 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:32.700 "dma_device_type": 2 00:15:32.700 } 00:15:32.700 ], 00:15:32.700 "driver_specific": {} 00:15:32.700 }' 00:15:32.700 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.700 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:32.700 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:32.700 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.700 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:32.700 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:32.700 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq 
.md_interleave 00:15:32.700 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:32.959 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:32.959 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.959 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:32.959 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:32.959 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:32.959 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:32.959 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:33.218 "name": "BaseBdev4", 00:15:33.218 "aliases": [ 00:15:33.218 "162ef2d5-4226-11ef-aa83-81fbc7dfef58" 00:15:33.218 ], 00:15:33.218 "product_name": "Malloc disk", 00:15:33.218 "block_size": 512, 00:15:33.218 "num_blocks": 65536, 00:15:33.218 "uuid": "162ef2d5-4226-11ef-aa83-81fbc7dfef58", 00:15:33.218 "assigned_rate_limits": { 00:15:33.218 "rw_ios_per_sec": 0, 00:15:33.218 "rw_mbytes_per_sec": 0, 00:15:33.218 "r_mbytes_per_sec": 0, 00:15:33.218 "w_mbytes_per_sec": 0 00:15:33.218 }, 00:15:33.218 "claimed": true, 00:15:33.218 "claim_type": "exclusive_write", 00:15:33.218 "zoned": false, 00:15:33.218 "supported_io_types": { 00:15:33.218 "read": true, 00:15:33.218 "write": true, 00:15:33.218 "unmap": true, 00:15:33.218 "flush": true, 00:15:33.218 "reset": true, 00:15:33.218 "nvme_admin": false, 00:15:33.218 "nvme_io": false, 00:15:33.218 "nvme_io_md": false, 00:15:33.218 "write_zeroes": true, 00:15:33.218 "zcopy": true, 00:15:33.218 "get_zone_info": false, 00:15:33.218 "zone_management": false, 00:15:33.218 "zone_append": false, 00:15:33.218 "compare": false, 00:15:33.218 "compare_and_write": false, 00:15:33.218 "abort": true, 00:15:33.218 "seek_hole": false, 00:15:33.218 "seek_data": false, 00:15:33.218 "copy": true, 00:15:33.218 "nvme_iov_md": false 00:15:33.218 }, 00:15:33.218 "memory_domains": [ 00:15:33.218 { 00:15:33.218 "dma_device_id": "system", 00:15:33.218 "dma_device_type": 1 00:15:33.218 }, 00:15:33.218 { 00:15:33.218 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:33.218 "dma_device_type": 2 00:15:33.218 } 00:15:33.218 ], 00:15:33.218 "driver_specific": {} 00:15:33.218 }' 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:33.218 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:33.475 [2024-07-14 21:14:44.802137] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@214 -- # return 0 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:33.475 21:14:44 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.733 21:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:33.733 "name": "Existed_Raid", 00:15:33.733 "uuid": "162ef92f-4226-11ef-aa83-81fbc7dfef58", 00:15:33.733 "strip_size_kb": 0, 00:15:33.733 "state": "online", 00:15:33.733 "raid_level": "raid1", 00:15:33.733 "superblock": false, 00:15:33.733 "num_base_bdevs": 4, 00:15:33.733 "num_base_bdevs_discovered": 3, 00:15:33.733 "num_base_bdevs_operational": 3, 00:15:33.733 "base_bdevs_list": [ 00:15:33.733 { 00:15:33.733 "name": null, 00:15:33.733 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:33.733 "is_configured": false, 00:15:33.733 "data_offset": 0, 00:15:33.733 "data_size": 65536 00:15:33.733 }, 00:15:33.733 { 00:15:33.733 "name": "BaseBdev2", 00:15:33.733 "uuid": "14b2b068-4226-11ef-aa83-81fbc7dfef58", 00:15:33.733 "is_configured": true, 00:15:33.733 "data_offset": 0, 00:15:33.733 "data_size": 65536 
00:15:33.733 }, 00:15:33.733 { 00:15:33.733 "name": "BaseBdev3", 00:15:33.733 "uuid": "156a695f-4226-11ef-aa83-81fbc7dfef58", 00:15:33.733 "is_configured": true, 00:15:33.733 "data_offset": 0, 00:15:33.733 "data_size": 65536 00:15:33.733 }, 00:15:33.733 { 00:15:33.733 "name": "BaseBdev4", 00:15:33.733 "uuid": "162ef2d5-4226-11ef-aa83-81fbc7dfef58", 00:15:33.733 "is_configured": true, 00:15:33.733 "data_offset": 0, 00:15:33.733 "data_size": 65536 00:15:33.733 } 00:15:33.733 ] 00:15:33.733 }' 00:15:33.733 21:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:33.733 21:14:45 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:33.992 21:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:33.992 21:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:33.992 21:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:33.992 21:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:34.251 21:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:34.251 21:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:34.251 21:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:34.508 [2024-07-14 21:14:45.872431] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:34.508 21:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:34.508 21:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:34.508 21:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:34.508 21:14:45 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:34.766 21:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:34.766 21:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:34.766 21:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:15:35.025 [2024-07-14 21:14:46.370485] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:35.025 21:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:35.025 21:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:35.025 21:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.025 21:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:35.282 21:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:35.282 21:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:35.282 21:14:46 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:15:35.541 [2024-07-14 21:14:46.884626] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:15:35.541 [2024-07-14 21:14:46.884672] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:35.541 [2024-07-14 21:14:46.890840] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:35.541 [2024-07-14 21:14:46.890854] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:35.541 [2024-07-14 21:14:46.890874] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x31ea12234a00 name Existed_Raid, state offline 00:15:35.541 21:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:35.541 21:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:35.541 21:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:35.541 21:14:46 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:15:35.800 21:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:15:35.800 21:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:15:35.800 21:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:15:35.800 21:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:15:35.800 21:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:35.800 21:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:35.800 BaseBdev2 00:15:36.058 21:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:15:36.058 21:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:36.058 21:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:36.058 21:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:36.058 21:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:36.058 21:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:36.058 21:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.316 21:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:36.316 [ 00:15:36.316 { 00:15:36.316 "name": "BaseBdev2", 00:15:36.316 "aliases": [ 00:15:36.316 "195b702b-4226-11ef-aa83-81fbc7dfef58" 00:15:36.316 ], 00:15:36.316 "product_name": "Malloc disk", 00:15:36.316 "block_size": 512, 00:15:36.316 "num_blocks": 65536, 00:15:36.316 "uuid": "195b702b-4226-11ef-aa83-81fbc7dfef58", 00:15:36.316 "assigned_rate_limits": { 00:15:36.316 "rw_ios_per_sec": 0, 00:15:36.316 "rw_mbytes_per_sec": 0, 00:15:36.316 
"r_mbytes_per_sec": 0, 00:15:36.316 "w_mbytes_per_sec": 0 00:15:36.316 }, 00:15:36.316 "claimed": false, 00:15:36.316 "zoned": false, 00:15:36.316 "supported_io_types": { 00:15:36.316 "read": true, 00:15:36.316 "write": true, 00:15:36.316 "unmap": true, 00:15:36.316 "flush": true, 00:15:36.316 "reset": true, 00:15:36.316 "nvme_admin": false, 00:15:36.316 "nvme_io": false, 00:15:36.316 "nvme_io_md": false, 00:15:36.316 "write_zeroes": true, 00:15:36.316 "zcopy": true, 00:15:36.316 "get_zone_info": false, 00:15:36.316 "zone_management": false, 00:15:36.316 "zone_append": false, 00:15:36.316 "compare": false, 00:15:36.316 "compare_and_write": false, 00:15:36.316 "abort": true, 00:15:36.317 "seek_hole": false, 00:15:36.317 "seek_data": false, 00:15:36.317 "copy": true, 00:15:36.317 "nvme_iov_md": false 00:15:36.317 }, 00:15:36.317 "memory_domains": [ 00:15:36.317 { 00:15:36.317 "dma_device_id": "system", 00:15:36.317 "dma_device_type": 1 00:15:36.317 }, 00:15:36.317 { 00:15:36.317 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:36.317 "dma_device_type": 2 00:15:36.317 } 00:15:36.317 ], 00:15:36.317 "driver_specific": {} 00:15:36.317 } 00:15:36.317 ] 00:15:36.576 21:14:47 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:36.576 21:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:36.576 21:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:36.576 21:14:47 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:36.576 BaseBdev3 00:15:36.576 21:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:15:36.576 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:36.576 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:36.576 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:36.576 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:36.576 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:36.576 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:36.835 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:37.094 [ 00:15:37.094 { 00:15:37.094 "name": "BaseBdev3", 00:15:37.094 "aliases": [ 00:15:37.094 "19cb21fb-4226-11ef-aa83-81fbc7dfef58" 00:15:37.094 ], 00:15:37.094 "product_name": "Malloc disk", 00:15:37.094 "block_size": 512, 00:15:37.094 "num_blocks": 65536, 00:15:37.094 "uuid": "19cb21fb-4226-11ef-aa83-81fbc7dfef58", 00:15:37.094 "assigned_rate_limits": { 00:15:37.094 "rw_ios_per_sec": 0, 00:15:37.094 "rw_mbytes_per_sec": 0, 00:15:37.094 "r_mbytes_per_sec": 0, 00:15:37.094 "w_mbytes_per_sec": 0 00:15:37.094 }, 00:15:37.094 "claimed": false, 00:15:37.094 "zoned": false, 00:15:37.094 "supported_io_types": { 00:15:37.094 "read": true, 00:15:37.094 "write": true, 00:15:37.094 "unmap": true, 00:15:37.094 "flush": true, 00:15:37.094 "reset": true, 00:15:37.094 "nvme_admin": false, 
00:15:37.094 "nvme_io": false, 00:15:37.094 "nvme_io_md": false, 00:15:37.094 "write_zeroes": true, 00:15:37.094 "zcopy": true, 00:15:37.094 "get_zone_info": false, 00:15:37.094 "zone_management": false, 00:15:37.094 "zone_append": false, 00:15:37.094 "compare": false, 00:15:37.094 "compare_and_write": false, 00:15:37.094 "abort": true, 00:15:37.094 "seek_hole": false, 00:15:37.094 "seek_data": false, 00:15:37.094 "copy": true, 00:15:37.094 "nvme_iov_md": false 00:15:37.094 }, 00:15:37.094 "memory_domains": [ 00:15:37.094 { 00:15:37.094 "dma_device_id": "system", 00:15:37.094 "dma_device_type": 1 00:15:37.094 }, 00:15:37.094 { 00:15:37.094 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.094 "dma_device_type": 2 00:15:37.094 } 00:15:37.094 ], 00:15:37.094 "driver_specific": {} 00:15:37.094 } 00:15:37.094 ] 00:15:37.094 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:37.094 21:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:37.094 21:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:37.094 21:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:37.353 BaseBdev4 00:15:37.353 21:14:48 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:15:37.353 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:37.353 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:37.353 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:37.353 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:37.353 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:37.353 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:37.612 21:14:48 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:37.870 [ 00:15:37.870 { 00:15:37.870 "name": "BaseBdev4", 00:15:37.870 "aliases": [ 00:15:37.870 "1a2e02d2-4226-11ef-aa83-81fbc7dfef58" 00:15:37.870 ], 00:15:37.870 "product_name": "Malloc disk", 00:15:37.870 "block_size": 512, 00:15:37.870 "num_blocks": 65536, 00:15:37.870 "uuid": "1a2e02d2-4226-11ef-aa83-81fbc7dfef58", 00:15:37.870 "assigned_rate_limits": { 00:15:37.870 "rw_ios_per_sec": 0, 00:15:37.870 "rw_mbytes_per_sec": 0, 00:15:37.870 "r_mbytes_per_sec": 0, 00:15:37.870 "w_mbytes_per_sec": 0 00:15:37.870 }, 00:15:37.870 "claimed": false, 00:15:37.870 "zoned": false, 00:15:37.870 "supported_io_types": { 00:15:37.870 "read": true, 00:15:37.870 "write": true, 00:15:37.870 "unmap": true, 00:15:37.870 "flush": true, 00:15:37.870 "reset": true, 00:15:37.870 "nvme_admin": false, 00:15:37.870 "nvme_io": false, 00:15:37.870 "nvme_io_md": false, 00:15:37.870 "write_zeroes": true, 00:15:37.870 "zcopy": true, 00:15:37.870 "get_zone_info": false, 00:15:37.870 "zone_management": false, 00:15:37.870 "zone_append": false, 00:15:37.870 "compare": false, 00:15:37.870 "compare_and_write": false, 00:15:37.870 "abort": true, 
00:15:37.870 "seek_hole": false, 00:15:37.870 "seek_data": false, 00:15:37.870 "copy": true, 00:15:37.870 "nvme_iov_md": false 00:15:37.870 }, 00:15:37.870 "memory_domains": [ 00:15:37.870 { 00:15:37.870 "dma_device_id": "system", 00:15:37.870 "dma_device_type": 1 00:15:37.870 }, 00:15:37.870 { 00:15:37.870 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:37.870 "dma_device_type": 2 00:15:37.870 } 00:15:37.870 ], 00:15:37.870 "driver_specific": {} 00:15:37.870 } 00:15:37.870 ] 00:15:37.870 21:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:37.870 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:15:37.870 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:15:37.870 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:38.127 [2024-07-14 21:14:49.430833] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:38.127 [2024-07-14 21:14:49.430885] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:38.127 [2024-07-14 21:14:49.430909] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:38.127 [2024-07-14 21:14:49.431575] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:38.127 [2024-07-14 21:14:49.431591] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:38.127 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:38.127 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:38.127 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:38.127 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:38.127 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:38.127 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:38.127 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.127 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.127 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.127 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.127 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.127 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.385 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:38.385 "name": "Existed_Raid", 00:15:38.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.385 "strip_size_kb": 0, 00:15:38.385 "state": "configuring", 00:15:38.385 "raid_level": "raid1", 00:15:38.385 "superblock": false, 00:15:38.385 "num_base_bdevs": 4, 00:15:38.385 
"num_base_bdevs_discovered": 3, 00:15:38.385 "num_base_bdevs_operational": 4, 00:15:38.385 "base_bdevs_list": [ 00:15:38.385 { 00:15:38.385 "name": "BaseBdev1", 00:15:38.385 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.385 "is_configured": false, 00:15:38.385 "data_offset": 0, 00:15:38.385 "data_size": 0 00:15:38.385 }, 00:15:38.385 { 00:15:38.385 "name": "BaseBdev2", 00:15:38.385 "uuid": "195b702b-4226-11ef-aa83-81fbc7dfef58", 00:15:38.385 "is_configured": true, 00:15:38.385 "data_offset": 0, 00:15:38.385 "data_size": 65536 00:15:38.385 }, 00:15:38.385 { 00:15:38.385 "name": "BaseBdev3", 00:15:38.385 "uuid": "19cb21fb-4226-11ef-aa83-81fbc7dfef58", 00:15:38.385 "is_configured": true, 00:15:38.385 "data_offset": 0, 00:15:38.385 "data_size": 65536 00:15:38.385 }, 00:15:38.385 { 00:15:38.385 "name": "BaseBdev4", 00:15:38.385 "uuid": "1a2e02d2-4226-11ef-aa83-81fbc7dfef58", 00:15:38.385 "is_configured": true, 00:15:38.385 "data_offset": 0, 00:15:38.385 "data_size": 65536 00:15:38.385 } 00:15:38.385 ] 00:15:38.385 }' 00:15:38.385 21:14:49 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:38.385 21:14:49 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:38.642 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:15:38.900 [2024-07-14 21:14:50.202842] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:38.900 "name": "Existed_Raid", 00:15:38.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.900 "strip_size_kb": 0, 00:15:38.900 "state": "configuring", 00:15:38.900 "raid_level": "raid1", 00:15:38.900 "superblock": false, 00:15:38.900 "num_base_bdevs": 4, 00:15:38.900 "num_base_bdevs_discovered": 2, 00:15:38.900 "num_base_bdevs_operational": 4, 00:15:38.900 "base_bdevs_list": [ 00:15:38.900 { 00:15:38.900 "name": 
"BaseBdev1", 00:15:38.900 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:38.900 "is_configured": false, 00:15:38.900 "data_offset": 0, 00:15:38.900 "data_size": 0 00:15:38.900 }, 00:15:38.900 { 00:15:38.900 "name": null, 00:15:38.900 "uuid": "195b702b-4226-11ef-aa83-81fbc7dfef58", 00:15:38.900 "is_configured": false, 00:15:38.900 "data_offset": 0, 00:15:38.900 "data_size": 65536 00:15:38.900 }, 00:15:38.900 { 00:15:38.900 "name": "BaseBdev3", 00:15:38.900 "uuid": "19cb21fb-4226-11ef-aa83-81fbc7dfef58", 00:15:38.900 "is_configured": true, 00:15:38.900 "data_offset": 0, 00:15:38.900 "data_size": 65536 00:15:38.900 }, 00:15:38.900 { 00:15:38.900 "name": "BaseBdev4", 00:15:38.900 "uuid": "1a2e02d2-4226-11ef-aa83-81fbc7dfef58", 00:15:38.900 "is_configured": true, 00:15:38.900 "data_offset": 0, 00:15:38.900 "data_size": 65536 00:15:38.900 } 00:15:38.900 ] 00:15:38.900 }' 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:38.900 21:14:50 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:39.467 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:39.467 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:39.467 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:15:39.467 21:14:50 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:39.726 [2024-07-14 21:14:51.194987] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:39.726 BaseBdev1 00:15:39.726 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:15:39.726 21:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:39.726 21:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:39.726 21:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:39.726 21:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:39.726 21:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:39.726 21:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:39.984 21:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:40.243 [ 00:15:40.243 { 00:15:40.243 "name": "BaseBdev1", 00:15:40.243 "aliases": [ 00:15:40.243 "1ba7d3bf-4226-11ef-aa83-81fbc7dfef58" 00:15:40.243 ], 00:15:40.243 "product_name": "Malloc disk", 00:15:40.243 "block_size": 512, 00:15:40.243 "num_blocks": 65536, 00:15:40.243 "uuid": "1ba7d3bf-4226-11ef-aa83-81fbc7dfef58", 00:15:40.243 "assigned_rate_limits": { 00:15:40.243 "rw_ios_per_sec": 0, 00:15:40.243 "rw_mbytes_per_sec": 0, 00:15:40.243 "r_mbytes_per_sec": 0, 00:15:40.243 "w_mbytes_per_sec": 0 00:15:40.243 }, 00:15:40.243 "claimed": true, 00:15:40.243 "claim_type": "exclusive_write", 00:15:40.243 "zoned": false, 
00:15:40.243 "supported_io_types": { 00:15:40.243 "read": true, 00:15:40.243 "write": true, 00:15:40.243 "unmap": true, 00:15:40.243 "flush": true, 00:15:40.243 "reset": true, 00:15:40.243 "nvme_admin": false, 00:15:40.243 "nvme_io": false, 00:15:40.243 "nvme_io_md": false, 00:15:40.243 "write_zeroes": true, 00:15:40.243 "zcopy": true, 00:15:40.243 "get_zone_info": false, 00:15:40.243 "zone_management": false, 00:15:40.243 "zone_append": false, 00:15:40.243 "compare": false, 00:15:40.243 "compare_and_write": false, 00:15:40.243 "abort": true, 00:15:40.243 "seek_hole": false, 00:15:40.243 "seek_data": false, 00:15:40.243 "copy": true, 00:15:40.243 "nvme_iov_md": false 00:15:40.243 }, 00:15:40.243 "memory_domains": [ 00:15:40.243 { 00:15:40.243 "dma_device_id": "system", 00:15:40.243 "dma_device_type": 1 00:15:40.243 }, 00:15:40.243 { 00:15:40.243 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:40.243 "dma_device_type": 2 00:15:40.243 } 00:15:40.243 ], 00:15:40.243 "driver_specific": {} 00:15:40.243 } 00:15:40.243 ] 00:15:40.243 21:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:40.243 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:40.243 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:40.243 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:40.243 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:40.243 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:40.243 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:40.243 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:40.243 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:40.243 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:40.243 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:40.243 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.243 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:40.502 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:40.502 "name": "Existed_Raid", 00:15:40.502 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:40.502 "strip_size_kb": 0, 00:15:40.502 "state": "configuring", 00:15:40.502 "raid_level": "raid1", 00:15:40.502 "superblock": false, 00:15:40.502 "num_base_bdevs": 4, 00:15:40.502 "num_base_bdevs_discovered": 3, 00:15:40.502 "num_base_bdevs_operational": 4, 00:15:40.502 "base_bdevs_list": [ 00:15:40.502 { 00:15:40.502 "name": "BaseBdev1", 00:15:40.502 "uuid": "1ba7d3bf-4226-11ef-aa83-81fbc7dfef58", 00:15:40.502 "is_configured": true, 00:15:40.502 "data_offset": 0, 00:15:40.502 "data_size": 65536 00:15:40.502 }, 00:15:40.502 { 00:15:40.502 "name": null, 00:15:40.502 "uuid": "195b702b-4226-11ef-aa83-81fbc7dfef58", 00:15:40.502 "is_configured": false, 00:15:40.502 "data_offset": 0, 00:15:40.502 "data_size": 65536 00:15:40.502 }, 
00:15:40.502 { 00:15:40.502 "name": "BaseBdev3", 00:15:40.502 "uuid": "19cb21fb-4226-11ef-aa83-81fbc7dfef58", 00:15:40.502 "is_configured": true, 00:15:40.502 "data_offset": 0, 00:15:40.502 "data_size": 65536 00:15:40.502 }, 00:15:40.502 { 00:15:40.502 "name": "BaseBdev4", 00:15:40.502 "uuid": "1a2e02d2-4226-11ef-aa83-81fbc7dfef58", 00:15:40.502 "is_configured": true, 00:15:40.502 "data_offset": 0, 00:15:40.502 "data_size": 65536 00:15:40.502 } 00:15:40.502 ] 00:15:40.502 }' 00:15:40.502 21:14:51 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:40.502 21:14:51 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:40.761 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:40.761 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:41.019 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:15:41.019 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:15:41.278 [2024-07-14 21:14:52.690892] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:15:41.278 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:41.278 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:41.278 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:41.278 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:41.278 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:41.278 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:41.278 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:41.278 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:41.278 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:41.278 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:41.278 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.278 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:41.537 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:41.537 "name": "Existed_Raid", 00:15:41.537 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:41.537 "strip_size_kb": 0, 00:15:41.537 "state": "configuring", 00:15:41.537 "raid_level": "raid1", 00:15:41.537 "superblock": false, 00:15:41.537 "num_base_bdevs": 4, 00:15:41.537 "num_base_bdevs_discovered": 2, 00:15:41.537 "num_base_bdevs_operational": 4, 00:15:41.537 "base_bdevs_list": [ 00:15:41.537 { 00:15:41.537 "name": "BaseBdev1", 00:15:41.537 "uuid": "1ba7d3bf-4226-11ef-aa83-81fbc7dfef58", 00:15:41.537 "is_configured": true, 00:15:41.537 "data_offset": 
0, 00:15:41.537 "data_size": 65536 00:15:41.537 }, 00:15:41.537 { 00:15:41.537 "name": null, 00:15:41.537 "uuid": "195b702b-4226-11ef-aa83-81fbc7dfef58", 00:15:41.537 "is_configured": false, 00:15:41.537 "data_offset": 0, 00:15:41.537 "data_size": 65536 00:15:41.537 }, 00:15:41.537 { 00:15:41.537 "name": null, 00:15:41.537 "uuid": "19cb21fb-4226-11ef-aa83-81fbc7dfef58", 00:15:41.538 "is_configured": false, 00:15:41.538 "data_offset": 0, 00:15:41.538 "data_size": 65536 00:15:41.538 }, 00:15:41.538 { 00:15:41.538 "name": "BaseBdev4", 00:15:41.538 "uuid": "1a2e02d2-4226-11ef-aa83-81fbc7dfef58", 00:15:41.538 "is_configured": true, 00:15:41.538 "data_offset": 0, 00:15:41.538 "data_size": 65536 00:15:41.538 } 00:15:41.538 ] 00:15:41.538 }' 00:15:41.538 21:14:52 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:41.538 21:14:52 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:41.797 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:41.797 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:42.071 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:15:42.071 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:15:42.357 [2024-07-14 21:14:53.738948] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:42.357 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:42.357 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:42.357 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:42.357 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:42.357 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:42.357 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:42.357 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:42.357 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:42.357 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:42.357 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:42.357 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:42.357 21:14:53 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:42.616 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:42.616 "name": "Existed_Raid", 00:15:42.616 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:42.616 "strip_size_kb": 0, 00:15:42.616 "state": "configuring", 00:15:42.616 "raid_level": "raid1", 00:15:42.616 "superblock": false, 00:15:42.616 "num_base_bdevs": 4, 
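
The hot-remove/re-add cycle the test keeps repeating is only three RPCs; what changes between dumps is which slot in base_bdevs_list drops to "name": null with is_configured false. A condensed sketch of the BaseBdev3 round trip traced above, same shorthand as before:

    # detach a member; its slot stays allocated but unconfigured
    $rpc -s $sock bdev_raid_remove_base_bdev BaseBdev3
    # slot 2 now reports is_configured == false, as the jq probe above checks
    $rpc -s $sock bdev_raid_get_bdevs all \
        | jq '.[0].base_bdevs_list[2].is_configured'
    # re-attach the same bdev to the named array
    $rpc -s $sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3
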
00:15:42.616 "num_base_bdevs_discovered": 3, 00:15:42.616 "num_base_bdevs_operational": 4, 00:15:42.616 "base_bdevs_list": [ 00:15:42.616 { 00:15:42.616 "name": "BaseBdev1", 00:15:42.616 "uuid": "1ba7d3bf-4226-11ef-aa83-81fbc7dfef58", 00:15:42.616 "is_configured": true, 00:15:42.616 "data_offset": 0, 00:15:42.616 "data_size": 65536 00:15:42.616 }, 00:15:42.616 { 00:15:42.616 "name": null, 00:15:42.616 "uuid": "195b702b-4226-11ef-aa83-81fbc7dfef58", 00:15:42.616 "is_configured": false, 00:15:42.616 "data_offset": 0, 00:15:42.616 "data_size": 65536 00:15:42.616 }, 00:15:42.616 { 00:15:42.616 "name": "BaseBdev3", 00:15:42.616 "uuid": "19cb21fb-4226-11ef-aa83-81fbc7dfef58", 00:15:42.616 "is_configured": true, 00:15:42.616 "data_offset": 0, 00:15:42.616 "data_size": 65536 00:15:42.616 }, 00:15:42.616 { 00:15:42.616 "name": "BaseBdev4", 00:15:42.616 "uuid": "1a2e02d2-4226-11ef-aa83-81fbc7dfef58", 00:15:42.616 "is_configured": true, 00:15:42.616 "data_offset": 0, 00:15:42.616 "data_size": 65536 00:15:42.616 } 00:15:42.616 ] 00:15:42.616 }' 00:15:42.616 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:42.616 21:14:54 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:42.874 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:15:42.874 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.133 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:15:43.133 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:43.392 [2024-07-14 21:14:54.774970] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:43.392 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:43.392 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:43.392 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:43.392 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:43.392 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:43.392 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:43.392 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:43.392 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:43.392 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:43.392 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:43.392 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.392 21:14:54 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:43.651 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 
00:15:43.651 "name": "Existed_Raid", 00:15:43.651 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:43.651 "strip_size_kb": 0, 00:15:43.651 "state": "configuring", 00:15:43.651 "raid_level": "raid1", 00:15:43.651 "superblock": false, 00:15:43.651 "num_base_bdevs": 4, 00:15:43.651 "num_base_bdevs_discovered": 2, 00:15:43.651 "num_base_bdevs_operational": 4, 00:15:43.651 "base_bdevs_list": [ 00:15:43.651 { 00:15:43.651 "name": null, 00:15:43.651 "uuid": "1ba7d3bf-4226-11ef-aa83-81fbc7dfef58", 00:15:43.651 "is_configured": false, 00:15:43.651 "data_offset": 0, 00:15:43.651 "data_size": 65536 00:15:43.651 }, 00:15:43.651 { 00:15:43.651 "name": null, 00:15:43.651 "uuid": "195b702b-4226-11ef-aa83-81fbc7dfef58", 00:15:43.651 "is_configured": false, 00:15:43.651 "data_offset": 0, 00:15:43.651 "data_size": 65536 00:15:43.651 }, 00:15:43.651 { 00:15:43.651 "name": "BaseBdev3", 00:15:43.651 "uuid": "19cb21fb-4226-11ef-aa83-81fbc7dfef58", 00:15:43.651 "is_configured": true, 00:15:43.651 "data_offset": 0, 00:15:43.651 "data_size": 65536 00:15:43.651 }, 00:15:43.651 { 00:15:43.651 "name": "BaseBdev4", 00:15:43.651 "uuid": "1a2e02d2-4226-11ef-aa83-81fbc7dfef58", 00:15:43.651 "is_configured": true, 00:15:43.651 "data_offset": 0, 00:15:43.651 "data_size": 65536 00:15:43.651 } 00:15:43.651 ] 00:15:43.651 }' 00:15:43.651 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:43.651 21:14:55 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:43.910 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:43.910 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:15:44.169 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:15:44.169 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:15:44.428 [2024-07-14 21:14:55.817206] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:44.428 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:44.428 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:44.428 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:44.428 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:44.428 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:44.428 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:44.428 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:44.428 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:44.428 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:44.428 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:44.428 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"Existed_Raid")' 00:15:44.428 21:14:55 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.688 21:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:44.688 "name": "Existed_Raid", 00:15:44.688 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:44.688 "strip_size_kb": 0, 00:15:44.688 "state": "configuring", 00:15:44.688 "raid_level": "raid1", 00:15:44.688 "superblock": false, 00:15:44.688 "num_base_bdevs": 4, 00:15:44.688 "num_base_bdevs_discovered": 3, 00:15:44.688 "num_base_bdevs_operational": 4, 00:15:44.688 "base_bdevs_list": [ 00:15:44.688 { 00:15:44.688 "name": null, 00:15:44.688 "uuid": "1ba7d3bf-4226-11ef-aa83-81fbc7dfef58", 00:15:44.688 "is_configured": false, 00:15:44.688 "data_offset": 0, 00:15:44.688 "data_size": 65536 00:15:44.688 }, 00:15:44.688 { 00:15:44.688 "name": "BaseBdev2", 00:15:44.688 "uuid": "195b702b-4226-11ef-aa83-81fbc7dfef58", 00:15:44.688 "is_configured": true, 00:15:44.688 "data_offset": 0, 00:15:44.688 "data_size": 65536 00:15:44.688 }, 00:15:44.688 { 00:15:44.688 "name": "BaseBdev3", 00:15:44.688 "uuid": "19cb21fb-4226-11ef-aa83-81fbc7dfef58", 00:15:44.688 "is_configured": true, 00:15:44.688 "data_offset": 0, 00:15:44.688 "data_size": 65536 00:15:44.688 }, 00:15:44.688 { 00:15:44.688 "name": "BaseBdev4", 00:15:44.688 "uuid": "1a2e02d2-4226-11ef-aa83-81fbc7dfef58", 00:15:44.688 "is_configured": true, 00:15:44.688 "data_offset": 0, 00:15:44.688 "data_size": 65536 00:15:44.688 } 00:15:44.688 ] 00:15:44.688 }' 00:15:44.688 21:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:44.688 21:14:56 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:44.947 21:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:44.947 21:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:15:45.206 21:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:15:45.206 21:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:45.206 21:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:15:45.465 21:14:56 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 1ba7d3bf-4226-11ef-aa83-81fbc7dfef58 00:15:45.724 [2024-07-14 21:14:57.017412] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:15:45.724 [2024-07-14 21:14:57.017437] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x31ea12234f00 00:15:45.724 [2024-07-14 21:14:57.017457] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 65536, blocklen 512 00:15:45.724 [2024-07-14 21:14:57.017479] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x31ea12297e20 00:15:45.724 [2024-07-14 21:14:57.017547] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x31ea12234f00 00:15:45.724 [2024-07-14 21:14:57.017551] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name 
Existed_Raid, raid_bdev 0x31ea12234f00 00:15:45.724 [2024-07-14 21:14:57.017582] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:45.724 NewBaseBdev 00:15:45.724 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:15:45.724 21:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:15:45.724 21:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:45.724 21:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@899 -- # local i 00:15:45.724 21:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:45.724 21:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:45.724 21:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:45.983 21:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:15:46.242 [ 00:15:46.242 { 00:15:46.242 "name": "NewBaseBdev", 00:15:46.242 "aliases": [ 00:15:46.242 "1ba7d3bf-4226-11ef-aa83-81fbc7dfef58" 00:15:46.242 ], 00:15:46.242 "product_name": "Malloc disk", 00:15:46.242 "block_size": 512, 00:15:46.242 "num_blocks": 65536, 00:15:46.242 "uuid": "1ba7d3bf-4226-11ef-aa83-81fbc7dfef58", 00:15:46.242 "assigned_rate_limits": { 00:15:46.242 "rw_ios_per_sec": 0, 00:15:46.242 "rw_mbytes_per_sec": 0, 00:15:46.242 "r_mbytes_per_sec": 0, 00:15:46.242 "w_mbytes_per_sec": 0 00:15:46.242 }, 00:15:46.242 "claimed": true, 00:15:46.242 "claim_type": "exclusive_write", 00:15:46.242 "zoned": false, 00:15:46.242 "supported_io_types": { 00:15:46.242 "read": true, 00:15:46.242 "write": true, 00:15:46.242 "unmap": true, 00:15:46.242 "flush": true, 00:15:46.242 "reset": true, 00:15:46.242 "nvme_admin": false, 00:15:46.242 "nvme_io": false, 00:15:46.242 "nvme_io_md": false, 00:15:46.242 "write_zeroes": true, 00:15:46.242 "zcopy": true, 00:15:46.242 "get_zone_info": false, 00:15:46.242 "zone_management": false, 00:15:46.242 "zone_append": false, 00:15:46.242 "compare": false, 00:15:46.242 "compare_and_write": false, 00:15:46.242 "abort": true, 00:15:46.242 "seek_hole": false, 00:15:46.242 "seek_data": false, 00:15:46.242 "copy": true, 00:15:46.242 "nvme_iov_md": false 00:15:46.242 }, 00:15:46.242 "memory_domains": [ 00:15:46.242 { 00:15:46.242 "dma_device_id": "system", 00:15:46.242 "dma_device_type": 1 00:15:46.242 }, 00:15:46.242 { 00:15:46.242 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:46.242 "dma_device_type": 2 00:15:46.242 } 00:15:46.242 ], 00:15:46.242 "driver_specific": {} 00:15:46.242 } 00:15:46.242 ] 00:15:46.242 21:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@905 -- # return 0 00:15:46.242 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:46.242 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:46.242 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:46.242 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:46.242 21:14:57 bdev_raid.raid_state_function_test -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:46.242 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:46.242 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:46.242 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:46.242 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:46.242 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:46.242 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:46.242 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:46.501 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:46.501 "name": "Existed_Raid", 00:15:46.501 "uuid": "1f2048af-4226-11ef-aa83-81fbc7dfef58", 00:15:46.501 "strip_size_kb": 0, 00:15:46.501 "state": "online", 00:15:46.501 "raid_level": "raid1", 00:15:46.501 "superblock": false, 00:15:46.501 "num_base_bdevs": 4, 00:15:46.501 "num_base_bdevs_discovered": 4, 00:15:46.501 "num_base_bdevs_operational": 4, 00:15:46.501 "base_bdevs_list": [ 00:15:46.501 { 00:15:46.501 "name": "NewBaseBdev", 00:15:46.501 "uuid": "1ba7d3bf-4226-11ef-aa83-81fbc7dfef58", 00:15:46.501 "is_configured": true, 00:15:46.501 "data_offset": 0, 00:15:46.501 "data_size": 65536 00:15:46.501 }, 00:15:46.501 { 00:15:46.501 "name": "BaseBdev2", 00:15:46.501 "uuid": "195b702b-4226-11ef-aa83-81fbc7dfef58", 00:15:46.501 "is_configured": true, 00:15:46.501 "data_offset": 0, 00:15:46.501 "data_size": 65536 00:15:46.501 }, 00:15:46.501 { 00:15:46.501 "name": "BaseBdev3", 00:15:46.501 "uuid": "19cb21fb-4226-11ef-aa83-81fbc7dfef58", 00:15:46.501 "is_configured": true, 00:15:46.501 "data_offset": 0, 00:15:46.501 "data_size": 65536 00:15:46.501 }, 00:15:46.501 { 00:15:46.501 "name": "BaseBdev4", 00:15:46.501 "uuid": "1a2e02d2-4226-11ef-aa83-81fbc7dfef58", 00:15:46.501 "is_configured": true, 00:15:46.501 "data_offset": 0, 00:15:46.501 "data_size": 65536 00:15:46.501 } 00:15:46.501 ] 00:15:46.501 }' 00:15:46.501 21:14:57 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:46.501 21:14:57 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:46.760 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:15:46.760 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:46.760 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:46.760 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:46.760 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:46.760 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@198 -- # local name 00:15:46.760 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:46.760 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:47.019 [2024-07-14 21:14:58.345332] 
bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:47.019 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:47.019 "name": "Existed_Raid", 00:15:47.019 "aliases": [ 00:15:47.019 "1f2048af-4226-11ef-aa83-81fbc7dfef58" 00:15:47.019 ], 00:15:47.019 "product_name": "Raid Volume", 00:15:47.019 "block_size": 512, 00:15:47.019 "num_blocks": 65536, 00:15:47.019 "uuid": "1f2048af-4226-11ef-aa83-81fbc7dfef58", 00:15:47.019 "assigned_rate_limits": { 00:15:47.019 "rw_ios_per_sec": 0, 00:15:47.019 "rw_mbytes_per_sec": 0, 00:15:47.019 "r_mbytes_per_sec": 0, 00:15:47.020 "w_mbytes_per_sec": 0 00:15:47.020 }, 00:15:47.020 "claimed": false, 00:15:47.020 "zoned": false, 00:15:47.020 "supported_io_types": { 00:15:47.020 "read": true, 00:15:47.020 "write": true, 00:15:47.020 "unmap": false, 00:15:47.020 "flush": false, 00:15:47.020 "reset": true, 00:15:47.020 "nvme_admin": false, 00:15:47.020 "nvme_io": false, 00:15:47.020 "nvme_io_md": false, 00:15:47.020 "write_zeroes": true, 00:15:47.020 "zcopy": false, 00:15:47.020 "get_zone_info": false, 00:15:47.020 "zone_management": false, 00:15:47.020 "zone_append": false, 00:15:47.020 "compare": false, 00:15:47.020 "compare_and_write": false, 00:15:47.020 "abort": false, 00:15:47.020 "seek_hole": false, 00:15:47.020 "seek_data": false, 00:15:47.020 "copy": false, 00:15:47.020 "nvme_iov_md": false 00:15:47.020 }, 00:15:47.020 "memory_domains": [ 00:15:47.020 { 00:15:47.020 "dma_device_id": "system", 00:15:47.020 "dma_device_type": 1 00:15:47.020 }, 00:15:47.020 { 00:15:47.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.020 "dma_device_type": 2 00:15:47.020 }, 00:15:47.020 { 00:15:47.020 "dma_device_id": "system", 00:15:47.020 "dma_device_type": 1 00:15:47.020 }, 00:15:47.020 { 00:15:47.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.020 "dma_device_type": 2 00:15:47.020 }, 00:15:47.020 { 00:15:47.020 "dma_device_id": "system", 00:15:47.020 "dma_device_type": 1 00:15:47.020 }, 00:15:47.020 { 00:15:47.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.020 "dma_device_type": 2 00:15:47.020 }, 00:15:47.020 { 00:15:47.020 "dma_device_id": "system", 00:15:47.020 "dma_device_type": 1 00:15:47.020 }, 00:15:47.020 { 00:15:47.020 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.020 "dma_device_type": 2 00:15:47.020 } 00:15:47.020 ], 00:15:47.020 "driver_specific": { 00:15:47.020 "raid": { 00:15:47.020 "uuid": "1f2048af-4226-11ef-aa83-81fbc7dfef58", 00:15:47.020 "strip_size_kb": 0, 00:15:47.020 "state": "online", 00:15:47.020 "raid_level": "raid1", 00:15:47.020 "superblock": false, 00:15:47.020 "num_base_bdevs": 4, 00:15:47.020 "num_base_bdevs_discovered": 4, 00:15:47.020 "num_base_bdevs_operational": 4, 00:15:47.020 "base_bdevs_list": [ 00:15:47.020 { 00:15:47.020 "name": "NewBaseBdev", 00:15:47.020 "uuid": "1ba7d3bf-4226-11ef-aa83-81fbc7dfef58", 00:15:47.020 "is_configured": true, 00:15:47.020 "data_offset": 0, 00:15:47.020 "data_size": 65536 00:15:47.020 }, 00:15:47.020 { 00:15:47.020 "name": "BaseBdev2", 00:15:47.020 "uuid": "195b702b-4226-11ef-aa83-81fbc7dfef58", 00:15:47.020 "is_configured": true, 00:15:47.020 "data_offset": 0, 00:15:47.020 "data_size": 65536 00:15:47.020 }, 00:15:47.020 { 00:15:47.020 "name": "BaseBdev3", 00:15:47.020 "uuid": "19cb21fb-4226-11ef-aa83-81fbc7dfef58", 00:15:47.020 "is_configured": true, 00:15:47.020 "data_offset": 0, 00:15:47.020 "data_size": 65536 00:15:47.020 }, 00:15:47.020 { 00:15:47.020 "name": "BaseBdev4", 00:15:47.020 "uuid": 
"1a2e02d2-4226-11ef-aa83-81fbc7dfef58", 00:15:47.020 "is_configured": true, 00:15:47.020 "data_offset": 0, 00:15:47.020 "data_size": 65536 00:15:47.020 } 00:15:47.020 ] 00:15:47.020 } 00:15:47.020 } 00:15:47.020 }' 00:15:47.020 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:47.020 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:15:47.020 BaseBdev2 00:15:47.020 BaseBdev3 00:15:47.020 BaseBdev4' 00:15:47.020 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:47.020 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:15:47.020 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:47.279 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:47.279 "name": "NewBaseBdev", 00:15:47.279 "aliases": [ 00:15:47.279 "1ba7d3bf-4226-11ef-aa83-81fbc7dfef58" 00:15:47.279 ], 00:15:47.279 "product_name": "Malloc disk", 00:15:47.279 "block_size": 512, 00:15:47.279 "num_blocks": 65536, 00:15:47.279 "uuid": "1ba7d3bf-4226-11ef-aa83-81fbc7dfef58", 00:15:47.279 "assigned_rate_limits": { 00:15:47.279 "rw_ios_per_sec": 0, 00:15:47.279 "rw_mbytes_per_sec": 0, 00:15:47.279 "r_mbytes_per_sec": 0, 00:15:47.279 "w_mbytes_per_sec": 0 00:15:47.279 }, 00:15:47.279 "claimed": true, 00:15:47.279 "claim_type": "exclusive_write", 00:15:47.279 "zoned": false, 00:15:47.279 "supported_io_types": { 00:15:47.279 "read": true, 00:15:47.279 "write": true, 00:15:47.279 "unmap": true, 00:15:47.279 "flush": true, 00:15:47.279 "reset": true, 00:15:47.279 "nvme_admin": false, 00:15:47.279 "nvme_io": false, 00:15:47.279 "nvme_io_md": false, 00:15:47.279 "write_zeroes": true, 00:15:47.279 "zcopy": true, 00:15:47.279 "get_zone_info": false, 00:15:47.279 "zone_management": false, 00:15:47.279 "zone_append": false, 00:15:47.279 "compare": false, 00:15:47.279 "compare_and_write": false, 00:15:47.279 "abort": true, 00:15:47.279 "seek_hole": false, 00:15:47.279 "seek_data": false, 00:15:47.279 "copy": true, 00:15:47.279 "nvme_iov_md": false 00:15:47.279 }, 00:15:47.279 "memory_domains": [ 00:15:47.279 { 00:15:47.279 "dma_device_id": "system", 00:15:47.279 "dma_device_type": 1 00:15:47.279 }, 00:15:47.279 { 00:15:47.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.279 "dma_device_type": 2 00:15:47.279 } 00:15:47.279 ], 00:15:47.279 "driver_specific": {} 00:15:47.279 }' 00:15:47.279 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.280 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.280 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:47.280 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.280 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.280 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:47.280 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.280 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.280 21:14:58 
bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:47.280 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.280 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.280 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:47.280 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:47.280 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:47.280 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:47.539 "name": "BaseBdev2", 00:15:47.539 "aliases": [ 00:15:47.539 "195b702b-4226-11ef-aa83-81fbc7dfef58" 00:15:47.539 ], 00:15:47.539 "product_name": "Malloc disk", 00:15:47.539 "block_size": 512, 00:15:47.539 "num_blocks": 65536, 00:15:47.539 "uuid": "195b702b-4226-11ef-aa83-81fbc7dfef58", 00:15:47.539 "assigned_rate_limits": { 00:15:47.539 "rw_ios_per_sec": 0, 00:15:47.539 "rw_mbytes_per_sec": 0, 00:15:47.539 "r_mbytes_per_sec": 0, 00:15:47.539 "w_mbytes_per_sec": 0 00:15:47.539 }, 00:15:47.539 "claimed": true, 00:15:47.539 "claim_type": "exclusive_write", 00:15:47.539 "zoned": false, 00:15:47.539 "supported_io_types": { 00:15:47.539 "read": true, 00:15:47.539 "write": true, 00:15:47.539 "unmap": true, 00:15:47.539 "flush": true, 00:15:47.539 "reset": true, 00:15:47.539 "nvme_admin": false, 00:15:47.539 "nvme_io": false, 00:15:47.539 "nvme_io_md": false, 00:15:47.539 "write_zeroes": true, 00:15:47.539 "zcopy": true, 00:15:47.539 "get_zone_info": false, 00:15:47.539 "zone_management": false, 00:15:47.539 "zone_append": false, 00:15:47.539 "compare": false, 00:15:47.539 "compare_and_write": false, 00:15:47.539 "abort": true, 00:15:47.539 "seek_hole": false, 00:15:47.539 "seek_data": false, 00:15:47.539 "copy": true, 00:15:47.539 "nvme_iov_md": false 00:15:47.539 }, 00:15:47.539 "memory_domains": [ 00:15:47.539 { 00:15:47.539 "dma_device_id": "system", 00:15:47.539 "dma_device_type": 1 00:15:47.539 }, 00:15:47.539 { 00:15:47.539 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.539 "dma_device_type": 2 00:15:47.539 } 00:15:47.539 ], 00:15:47.539 "driver_specific": {} 00:15:47.539 }' 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.539 
21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:47.539 21:14:58 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:47.798 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:47.798 "name": "BaseBdev3", 00:15:47.798 "aliases": [ 00:15:47.798 "19cb21fb-4226-11ef-aa83-81fbc7dfef58" 00:15:47.798 ], 00:15:47.798 "product_name": "Malloc disk", 00:15:47.798 "block_size": 512, 00:15:47.798 "num_blocks": 65536, 00:15:47.799 "uuid": "19cb21fb-4226-11ef-aa83-81fbc7dfef58", 00:15:47.799 "assigned_rate_limits": { 00:15:47.799 "rw_ios_per_sec": 0, 00:15:47.799 "rw_mbytes_per_sec": 0, 00:15:47.799 "r_mbytes_per_sec": 0, 00:15:47.799 "w_mbytes_per_sec": 0 00:15:47.799 }, 00:15:47.799 "claimed": true, 00:15:47.799 "claim_type": "exclusive_write", 00:15:47.799 "zoned": false, 00:15:47.799 "supported_io_types": { 00:15:47.799 "read": true, 00:15:47.799 "write": true, 00:15:47.799 "unmap": true, 00:15:47.799 "flush": true, 00:15:47.799 "reset": true, 00:15:47.799 "nvme_admin": false, 00:15:47.799 "nvme_io": false, 00:15:47.799 "nvme_io_md": false, 00:15:47.799 "write_zeroes": true, 00:15:47.799 "zcopy": true, 00:15:47.799 "get_zone_info": false, 00:15:47.799 "zone_management": false, 00:15:47.799 "zone_append": false, 00:15:47.799 "compare": false, 00:15:47.799 "compare_and_write": false, 00:15:47.799 "abort": true, 00:15:47.799 "seek_hole": false, 00:15:47.799 "seek_data": false, 00:15:47.799 "copy": true, 00:15:47.799 "nvme_iov_md": false 00:15:47.799 }, 00:15:47.799 "memory_domains": [ 00:15:47.799 { 00:15:47.799 "dma_device_id": "system", 00:15:47.799 "dma_device_type": 1 00:15:47.799 }, 00:15:47.799 { 00:15:47.799 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:47.799 "dma_device_type": 2 00:15:47.799 } 00:15:47.799 ], 00:15:47.799 "driver_specific": {} 00:15:47.799 }' 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 
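
verify_raid_bdev_properties, whose jq probes fill this stretch of the trace, pulls the configured member names out of the raid volume's own dump and then asserts each member matches the volume's block size and metadata layout. One iteration reduces to roughly the following, with BaseBdev3 standing in for any member (same $rpc/$sock shorthand as above):

    # members are listed under the raid volume's driver_specific section
    $rpc -s $sock bdev_get_bdevs -b Existed_Raid | jq -r \
        '.[].driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name'
    # per-member checks: 512 == 512 and null == null, as echoed in the trace
    info=$($rpc -s $sock bdev_get_bdevs -b BaseBdev3 | jq '.[]')
    jq .block_size    <<<"$info"   # 512
    jq .md_size       <<<"$info"   # null for these plain malloc bdevs
    jq .md_interleave <<<"$info"   # null
    jq .dif_type      <<<"$info"   # null
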
00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:47.799 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:48.057 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:48.057 "name": "BaseBdev4", 00:15:48.057 "aliases": [ 00:15:48.057 "1a2e02d2-4226-11ef-aa83-81fbc7dfef58" 00:15:48.057 ], 00:15:48.057 "product_name": "Malloc disk", 00:15:48.058 "block_size": 512, 00:15:48.058 "num_blocks": 65536, 00:15:48.058 "uuid": "1a2e02d2-4226-11ef-aa83-81fbc7dfef58", 00:15:48.058 "assigned_rate_limits": { 00:15:48.058 "rw_ios_per_sec": 0, 00:15:48.058 "rw_mbytes_per_sec": 0, 00:15:48.058 "r_mbytes_per_sec": 0, 00:15:48.058 "w_mbytes_per_sec": 0 00:15:48.058 }, 00:15:48.058 "claimed": true, 00:15:48.058 "claim_type": "exclusive_write", 00:15:48.058 "zoned": false, 00:15:48.058 "supported_io_types": { 00:15:48.058 "read": true, 00:15:48.058 "write": true, 00:15:48.058 "unmap": true, 00:15:48.058 "flush": true, 00:15:48.058 "reset": true, 00:15:48.058 "nvme_admin": false, 00:15:48.058 "nvme_io": false, 00:15:48.058 "nvme_io_md": false, 00:15:48.058 "write_zeroes": true, 00:15:48.058 "zcopy": true, 00:15:48.058 "get_zone_info": false, 00:15:48.058 "zone_management": false, 00:15:48.058 "zone_append": false, 00:15:48.058 "compare": false, 00:15:48.058 "compare_and_write": false, 00:15:48.058 "abort": true, 00:15:48.058 "seek_hole": false, 00:15:48.058 "seek_data": false, 00:15:48.058 "copy": true, 00:15:48.058 "nvme_iov_md": false 00:15:48.058 }, 00:15:48.058 "memory_domains": [ 00:15:48.058 { 00:15:48.058 "dma_device_id": "system", 00:15:48.058 "dma_device_type": 1 00:15:48.058 }, 00:15:48.058 { 00:15:48.058 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:48.058 "dma_device_type": 2 00:15:48.058 } 00:15:48.058 ], 00:15:48.058 "driver_specific": {} 00:15:48.058 }' 00:15:48.058 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.058 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:48.058 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:48.058 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.058 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:48.058 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:48.058 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.058 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:48.058 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:48.058 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.058 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:48.058 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:48.058 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:48.316 
[2024-07-14 21:14:59.861367] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:48.316 [2024-07-14 21:14:59.861387] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:15:48.316 [2024-07-14 21:14:59.861425] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:15:48.316 [2024-07-14 21:14:59.861492] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:15:48.316 [2024-07-14 21:14:59.861496] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x31ea12234f00 name Existed_Raid, state offline 00:15:48.576 21:14:59 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@341 -- # killprocess 62874 00:15:48.576 21:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@948 -- # '[' -z 62874 ']' 00:15:48.576 21:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@952 -- # kill -0 62874 00:15:48.576 21:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # uname 00:15:48.576 21:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:15:48.576 21:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # ps -c -o command 62874 00:15:48.576 21:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # tail -1 00:15:48.576 21:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:15:48.576 killing process with pid 62874 00:15:48.576 21:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:15:48.576 21:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 62874' 00:15:48.576 21:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@967 -- # kill 62874 00:15:48.576 [2024-07-14 21:14:59.888156] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:15:48.576 21:14:59 bdev_raid.raid_state_function_test -- common/autotest_common.sh@972 -- # wait 62874 00:15:48.576 [2024-07-14 21:14:59.912404] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:15:48.576 21:15:00 bdev_raid.raid_state_function_test -- bdev/bdev_raid.sh@343 -- # return 0 00:15:48.576 00:15:48.576 real 0m25.390s 00:15:48.576 user 0m46.258s 00:15:48.576 sys 0m3.694s 00:15:48.576 21:15:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:48.576 ************************************ 00:15:48.576 END TEST raid_state_function_test 00:15:48.576 ************************************ 00:15:48.576 21:15:00 bdev_raid.raid_state_function_test -- common/autotest_common.sh@10 -- # set +x 00:15:48.835 21:15:00 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:15:48.835 21:15:00 bdev_raid -- bdev/bdev_raid.sh@868 -- # run_test raid_state_function_test_sb raid_state_function_test raid1 4 true 00:15:48.835 21:15:00 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:15:48.835 21:15:00 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:48.835 21:15:00 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:15:48.835 ************************************ 00:15:48.835 START TEST raid_state_function_test_sb 00:15:48.835 ************************************ 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1123 -- # 
raid_state_function_test raid1 4 true 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=4 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev3 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # echo BaseBdev4 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@226 -- # local strip_size 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@244 -- # raid_pid=63685 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 63685' 00:15:48.835 Process raid pid: 63685 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@246 -- # waitforlisten 63685 /var/tmp/spdk-raid.sock 
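For readers replaying this sequence by hand, the RPC flow the harness drives here can be approximated with the sketch below. It assumes bdev_svc is already listening on /var/tmp/spdk-raid.sock (started as in the log: test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid), and it creates the base bdevs up front, whereas the test deliberately registers the array first so that it starts out in the "configuring" state with zero base bdevs discovered.

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  sock=/var/tmp/spdk-raid.sock

  # Four 32 MiB malloc bdevs with 512-byte blocks (65536 blocks x 512 B,
  # the sizes reported throughout this log).
  for i in 1 2 3 4; do
      "$rpc" -s "$sock" bdev_malloc_create 32 512 -b "BaseBdev$i"
  done

  # -s enables the on-disk superblock (hence data_offset 2048 in the dumps
  # below); -r raid1 selects the RAID level.
  "$rpc" -s "$sock" bdev_raid_create -s -r raid1 \
      -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid

  # With all four base bdevs present the array should come up "online".
  "$rpc" -s "$sock" bdev_raid_get_bdevs all |
      jq -r '.[] | select(.name == "Existed_Raid")'
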
00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@829 -- # '[' -z 63685 ']' 00:15:48.835 21:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:15:48.836 21:15:00 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:15:48.836 21:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:48.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:15:48.836 21:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:15:48.836 21:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:48.836 21:15:00 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:48.836 [2024-07-14 21:15:00.170354] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:15:48.836 [2024-07-14 21:15:00.170524] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:15:49.403 EAL: TSC is not safe to use in SMP mode 00:15:49.403 EAL: TSC is not invariant 00:15:49.403 [2024-07-14 21:15:00.712120] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:49.403 [2024-07-14 21:15:00.796940] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:15:49.403 [2024-07-14 21:15:00.799386] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:49.403 [2024-07-14 21:15:00.800331] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.403 [2024-07-14 21:15:00.800359] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:15:49.661 21:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:49.661 21:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@862 -- # return 0 00:15:49.661 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:49.919 [2024-07-14 21:15:01.336449] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:49.919 [2024-07-14 21:15:01.336516] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:49.919 [2024-07-14 21:15:01.336521] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:49.919 [2024-07-14 21:15:01.336544] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:49.919 [2024-07-14 21:15:01.336547] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:49.919 [2024-07-14 21:15:01.336553] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:49.919 [2024-07-14 21:15:01.336556] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:49.919 [2024-07-14 21:15:01.336562] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist 
now 00:15:49.919 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:49.919 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:49.919 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:49.919 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:49.919 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:49.919 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:49.919 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:49.919 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:49.919 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:49.919 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:49.919 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:49.919 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:50.177 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:50.177 "name": "Existed_Raid", 00:15:50.177 "uuid": "21b34ece-4226-11ef-aa83-81fbc7dfef58", 00:15:50.177 "strip_size_kb": 0, 00:15:50.177 "state": "configuring", 00:15:50.177 "raid_level": "raid1", 00:15:50.177 "superblock": true, 00:15:50.177 "num_base_bdevs": 4, 00:15:50.177 "num_base_bdevs_discovered": 0, 00:15:50.177 "num_base_bdevs_operational": 4, 00:15:50.177 "base_bdevs_list": [ 00:15:50.177 { 00:15:50.177 "name": "BaseBdev1", 00:15:50.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.177 "is_configured": false, 00:15:50.177 "data_offset": 0, 00:15:50.177 "data_size": 0 00:15:50.177 }, 00:15:50.177 { 00:15:50.177 "name": "BaseBdev2", 00:15:50.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.177 "is_configured": false, 00:15:50.177 "data_offset": 0, 00:15:50.177 "data_size": 0 00:15:50.177 }, 00:15:50.177 { 00:15:50.177 "name": "BaseBdev3", 00:15:50.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.177 "is_configured": false, 00:15:50.177 "data_offset": 0, 00:15:50.177 "data_size": 0 00:15:50.177 }, 00:15:50.177 { 00:15:50.177 "name": "BaseBdev4", 00:15:50.177 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:50.177 "is_configured": false, 00:15:50.177 "data_offset": 0, 00:15:50.177 "data_size": 0 00:15:50.177 } 00:15:50.177 ] 00:15:50.177 }' 00:15:50.177 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:50.177 21:15:01 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:50.435 21:15:01 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:50.692 [2024-07-14 21:15:02.112511] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:50.692 [2024-07-14 21:15:02.112537] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: 
raid_bdev_cleanup, 0x22f54b834500 name Existed_Raid, state configuring 00:15:50.692 21:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:50.950 [2024-07-14 21:15:02.368555] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:15:50.951 [2024-07-14 21:15:02.368615] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:15:50.951 [2024-07-14 21:15:02.368620] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:50.951 [2024-07-14 21:15:02.368642] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:50.951 [2024-07-14 21:15:02.368645] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:50.951 [2024-07-14 21:15:02.368651] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:50.951 [2024-07-14 21:15:02.368654] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:50.951 [2024-07-14 21:15:02.368660] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:50.951 21:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:15:51.209 [2024-07-14 21:15:02.577473] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:51.209 BaseBdev1 00:15:51.209 21:15:02 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:15:51.209 21:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:15:51.209 21:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:51.209 21:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:51.209 21:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:51.209 21:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:51.209 21:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:51.468 21:15:02 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:15:51.727 [ 00:15:51.727 { 00:15:51.727 "name": "BaseBdev1", 00:15:51.727 "aliases": [ 00:15:51.727 "22708901-4226-11ef-aa83-81fbc7dfef58" 00:15:51.727 ], 00:15:51.727 "product_name": "Malloc disk", 00:15:51.727 "block_size": 512, 00:15:51.727 "num_blocks": 65536, 00:15:51.727 "uuid": "22708901-4226-11ef-aa83-81fbc7dfef58", 00:15:51.727 "assigned_rate_limits": { 00:15:51.727 "rw_ios_per_sec": 0, 00:15:51.727 "rw_mbytes_per_sec": 0, 00:15:51.727 "r_mbytes_per_sec": 0, 00:15:51.727 "w_mbytes_per_sec": 0 00:15:51.727 }, 00:15:51.727 "claimed": true, 00:15:51.727 "claim_type": "exclusive_write", 00:15:51.727 "zoned": false, 00:15:51.727 "supported_io_types": { 00:15:51.727 "read": true, 00:15:51.727 "write": true, 00:15:51.727 "unmap": true, 
00:15:51.727 "flush": true, 00:15:51.727 "reset": true, 00:15:51.727 "nvme_admin": false, 00:15:51.727 "nvme_io": false, 00:15:51.727 "nvme_io_md": false, 00:15:51.727 "write_zeroes": true, 00:15:51.727 "zcopy": true, 00:15:51.727 "get_zone_info": false, 00:15:51.727 "zone_management": false, 00:15:51.727 "zone_append": false, 00:15:51.727 "compare": false, 00:15:51.727 "compare_and_write": false, 00:15:51.727 "abort": true, 00:15:51.727 "seek_hole": false, 00:15:51.727 "seek_data": false, 00:15:51.727 "copy": true, 00:15:51.727 "nvme_iov_md": false 00:15:51.727 }, 00:15:51.727 "memory_domains": [ 00:15:51.727 { 00:15:51.727 "dma_device_id": "system", 00:15:51.727 "dma_device_type": 1 00:15:51.727 }, 00:15:51.727 { 00:15:51.727 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:51.727 "dma_device_type": 2 00:15:51.727 } 00:15:51.727 ], 00:15:51.727 "driver_specific": {} 00:15:51.727 } 00:15:51.727 ] 00:15:51.727 21:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:51.727 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:51.727 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:51.727 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:51.727 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:51.727 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:51.727 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:51.727 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:51.727 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:51.727 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:51.727 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:51.727 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:51.727 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:51.985 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:51.985 "name": "Existed_Raid", 00:15:51.985 "uuid": "2250cb82-4226-11ef-aa83-81fbc7dfef58", 00:15:51.985 "strip_size_kb": 0, 00:15:51.985 "state": "configuring", 00:15:51.985 "raid_level": "raid1", 00:15:51.985 "superblock": true, 00:15:51.985 "num_base_bdevs": 4, 00:15:51.985 "num_base_bdevs_discovered": 1, 00:15:51.985 "num_base_bdevs_operational": 4, 00:15:51.985 "base_bdevs_list": [ 00:15:51.985 { 00:15:51.985 "name": "BaseBdev1", 00:15:51.985 "uuid": "22708901-4226-11ef-aa83-81fbc7dfef58", 00:15:51.985 "is_configured": true, 00:15:51.985 "data_offset": 2048, 00:15:51.985 "data_size": 63488 00:15:51.985 }, 00:15:51.985 { 00:15:51.985 "name": "BaseBdev2", 00:15:51.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.985 "is_configured": false, 00:15:51.985 "data_offset": 0, 00:15:51.985 "data_size": 0 00:15:51.985 }, 00:15:51.985 { 00:15:51.985 "name": "BaseBdev3", 00:15:51.985 "uuid": 
"00000000-0000-0000-0000-000000000000", 00:15:51.985 "is_configured": false, 00:15:51.985 "data_offset": 0, 00:15:51.985 "data_size": 0 00:15:51.985 }, 00:15:51.985 { 00:15:51.985 "name": "BaseBdev4", 00:15:51.985 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:51.985 "is_configured": false, 00:15:51.985 "data_offset": 0, 00:15:51.985 "data_size": 0 00:15:51.985 } 00:15:51.985 ] 00:15:51.985 }' 00:15:51.985 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:51.985 21:15:03 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:52.242 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:15:52.499 [2024-07-14 21:15:03.840599] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:15:52.499 [2024-07-14 21:15:03.840639] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22f54b834500 name Existed_Raid, state configuring 00:15:52.499 21:15:03 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:15:52.756 [2024-07-14 21:15:04.056671] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:15:52.756 [2024-07-14 21:15:04.057615] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:15:52.756 [2024-07-14 21:15:04.057665] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:15:52.756 [2024-07-14 21:15:04.057670] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev3 00:15:52.756 [2024-07-14 21:15:04.057694] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev3 doesn't exist now 00:15:52.756 [2024-07-14 21:15:04.057697] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev4 00:15:52.756 [2024-07-14 21:15:04.057704] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev4 doesn't exist now 00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 
00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:52.756 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:53.014 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:53.014 "name": "Existed_Raid", 00:15:53.014 "uuid": "23526160-4226-11ef-aa83-81fbc7dfef58", 00:15:53.014 "strip_size_kb": 0, 00:15:53.014 "state": "configuring", 00:15:53.014 "raid_level": "raid1", 00:15:53.014 "superblock": true, 00:15:53.014 "num_base_bdevs": 4, 00:15:53.014 "num_base_bdevs_discovered": 1, 00:15:53.014 "num_base_bdevs_operational": 4, 00:15:53.014 "base_bdevs_list": [ 00:15:53.014 { 00:15:53.014 "name": "BaseBdev1", 00:15:53.014 "uuid": "22708901-4226-11ef-aa83-81fbc7dfef58", 00:15:53.014 "is_configured": true, 00:15:53.014 "data_offset": 2048, 00:15:53.014 "data_size": 63488 00:15:53.014 }, 00:15:53.014 { 00:15:53.014 "name": "BaseBdev2", 00:15:53.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.014 "is_configured": false, 00:15:53.014 "data_offset": 0, 00:15:53.014 "data_size": 0 00:15:53.014 }, 00:15:53.014 { 00:15:53.014 "name": "BaseBdev3", 00:15:53.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.014 "is_configured": false, 00:15:53.014 "data_offset": 0, 00:15:53.014 "data_size": 0 00:15:53.014 }, 00:15:53.014 { 00:15:53.014 "name": "BaseBdev4", 00:15:53.014 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:53.014 "is_configured": false, 00:15:53.014 "data_offset": 0, 00:15:53.014 "data_size": 0 00:15:53.014 } 00:15:53.014 ] 00:15:53.014 }' 00:15:53.014 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:53.014 21:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:53.272 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:15:53.529 [2024-07-14 21:15:04.876776] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:15:53.529 BaseBdev2 00:15:53.529 21:15:04 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:15:53.529 21:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:15:53.529 21:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:53.529 21:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:53.529 21:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:53.529 21:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:53.529 21:15:04 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:53.787 21:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:15:54.045 [ 00:15:54.045 { 00:15:54.045 "name": "BaseBdev2", 
00:15:54.045 "aliases": [ 00:15:54.045 "23cf804f-4226-11ef-aa83-81fbc7dfef58" 00:15:54.045 ], 00:15:54.045 "product_name": "Malloc disk", 00:15:54.045 "block_size": 512, 00:15:54.045 "num_blocks": 65536, 00:15:54.045 "uuid": "23cf804f-4226-11ef-aa83-81fbc7dfef58", 00:15:54.045 "assigned_rate_limits": { 00:15:54.045 "rw_ios_per_sec": 0, 00:15:54.045 "rw_mbytes_per_sec": 0, 00:15:54.045 "r_mbytes_per_sec": 0, 00:15:54.045 "w_mbytes_per_sec": 0 00:15:54.045 }, 00:15:54.045 "claimed": true, 00:15:54.045 "claim_type": "exclusive_write", 00:15:54.045 "zoned": false, 00:15:54.045 "supported_io_types": { 00:15:54.045 "read": true, 00:15:54.045 "write": true, 00:15:54.045 "unmap": true, 00:15:54.045 "flush": true, 00:15:54.045 "reset": true, 00:15:54.045 "nvme_admin": false, 00:15:54.045 "nvme_io": false, 00:15:54.045 "nvme_io_md": false, 00:15:54.045 "write_zeroes": true, 00:15:54.045 "zcopy": true, 00:15:54.045 "get_zone_info": false, 00:15:54.045 "zone_management": false, 00:15:54.045 "zone_append": false, 00:15:54.045 "compare": false, 00:15:54.045 "compare_and_write": false, 00:15:54.045 "abort": true, 00:15:54.045 "seek_hole": false, 00:15:54.045 "seek_data": false, 00:15:54.045 "copy": true, 00:15:54.045 "nvme_iov_md": false 00:15:54.045 }, 00:15:54.045 "memory_domains": [ 00:15:54.045 { 00:15:54.045 "dma_device_id": "system", 00:15:54.045 "dma_device_type": 1 00:15:54.045 }, 00:15:54.045 { 00:15:54.045 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:54.045 "dma_device_type": 2 00:15:54.045 } 00:15:54.045 ], 00:15:54.045 "driver_specific": {} 00:15:54.045 } 00:15:54.045 ] 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:54.045 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:54.304 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:54.304 "name": 
"Existed_Raid", 00:15:54.304 "uuid": "23526160-4226-11ef-aa83-81fbc7dfef58", 00:15:54.304 "strip_size_kb": 0, 00:15:54.304 "state": "configuring", 00:15:54.304 "raid_level": "raid1", 00:15:54.304 "superblock": true, 00:15:54.304 "num_base_bdevs": 4, 00:15:54.304 "num_base_bdevs_discovered": 2, 00:15:54.304 "num_base_bdevs_operational": 4, 00:15:54.304 "base_bdevs_list": [ 00:15:54.304 { 00:15:54.304 "name": "BaseBdev1", 00:15:54.304 "uuid": "22708901-4226-11ef-aa83-81fbc7dfef58", 00:15:54.304 "is_configured": true, 00:15:54.304 "data_offset": 2048, 00:15:54.304 "data_size": 63488 00:15:54.304 }, 00:15:54.304 { 00:15:54.304 "name": "BaseBdev2", 00:15:54.304 "uuid": "23cf804f-4226-11ef-aa83-81fbc7dfef58", 00:15:54.304 "is_configured": true, 00:15:54.304 "data_offset": 2048, 00:15:54.304 "data_size": 63488 00:15:54.304 }, 00:15:54.304 { 00:15:54.304 "name": "BaseBdev3", 00:15:54.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.304 "is_configured": false, 00:15:54.304 "data_offset": 0, 00:15:54.304 "data_size": 0 00:15:54.304 }, 00:15:54.304 { 00:15:54.304 "name": "BaseBdev4", 00:15:54.304 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:54.304 "is_configured": false, 00:15:54.304 "data_offset": 0, 00:15:54.304 "data_size": 0 00:15:54.304 } 00:15:54.304 ] 00:15:54.304 }' 00:15:54.304 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:54.304 21:15:05 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:54.562 21:15:05 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:15:54.826 [2024-07-14 21:15:06.196842] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:15:54.826 BaseBdev3 00:15:54.826 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev3 00:15:54.826 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:15:54.826 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:54.826 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:54.826 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:54.826 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:54.826 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:15:55.096 [ 00:15:55.096 { 00:15:55.096 "name": "BaseBdev3", 00:15:55.096 "aliases": [ 00:15:55.096 "2498edc2-4226-11ef-aa83-81fbc7dfef58" 00:15:55.096 ], 00:15:55.096 "product_name": "Malloc disk", 00:15:55.096 "block_size": 512, 00:15:55.096 "num_blocks": 65536, 00:15:55.096 "uuid": "2498edc2-4226-11ef-aa83-81fbc7dfef58", 00:15:55.096 "assigned_rate_limits": { 00:15:55.096 "rw_ios_per_sec": 0, 00:15:55.096 "rw_mbytes_per_sec": 0, 00:15:55.096 "r_mbytes_per_sec": 0, 00:15:55.096 "w_mbytes_per_sec": 0 00:15:55.096 }, 00:15:55.096 "claimed": true, 00:15:55.096 "claim_type": "exclusive_write", 
00:15:55.096 "zoned": false, 00:15:55.096 "supported_io_types": { 00:15:55.096 "read": true, 00:15:55.096 "write": true, 00:15:55.096 "unmap": true, 00:15:55.096 "flush": true, 00:15:55.096 "reset": true, 00:15:55.096 "nvme_admin": false, 00:15:55.096 "nvme_io": false, 00:15:55.096 "nvme_io_md": false, 00:15:55.096 "write_zeroes": true, 00:15:55.096 "zcopy": true, 00:15:55.096 "get_zone_info": false, 00:15:55.096 "zone_management": false, 00:15:55.096 "zone_append": false, 00:15:55.096 "compare": false, 00:15:55.096 "compare_and_write": false, 00:15:55.096 "abort": true, 00:15:55.096 "seek_hole": false, 00:15:55.096 "seek_data": false, 00:15:55.096 "copy": true, 00:15:55.096 "nvme_iov_md": false 00:15:55.096 }, 00:15:55.096 "memory_domains": [ 00:15:55.096 { 00:15:55.096 "dma_device_id": "system", 00:15:55.096 "dma_device_type": 1 00:15:55.096 }, 00:15:55.096 { 00:15:55.096 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:55.096 "dma_device_type": 2 00:15:55.096 } 00:15:55.096 ], 00:15:55.096 "driver_specific": {} 00:15:55.096 } 00:15:55.096 ] 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:55.096 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:55.354 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:55.354 "name": "Existed_Raid", 00:15:55.354 "uuid": "23526160-4226-11ef-aa83-81fbc7dfef58", 00:15:55.354 "strip_size_kb": 0, 00:15:55.354 "state": "configuring", 00:15:55.354 "raid_level": "raid1", 00:15:55.354 "superblock": true, 00:15:55.354 "num_base_bdevs": 4, 00:15:55.354 "num_base_bdevs_discovered": 3, 00:15:55.354 "num_base_bdevs_operational": 4, 00:15:55.354 "base_bdevs_list": [ 00:15:55.354 { 00:15:55.354 "name": "BaseBdev1", 00:15:55.354 "uuid": "22708901-4226-11ef-aa83-81fbc7dfef58", 00:15:55.354 "is_configured": true, 00:15:55.354 
"data_offset": 2048, 00:15:55.354 "data_size": 63488 00:15:55.354 }, 00:15:55.354 { 00:15:55.354 "name": "BaseBdev2", 00:15:55.354 "uuid": "23cf804f-4226-11ef-aa83-81fbc7dfef58", 00:15:55.354 "is_configured": true, 00:15:55.354 "data_offset": 2048, 00:15:55.354 "data_size": 63488 00:15:55.354 }, 00:15:55.354 { 00:15:55.354 "name": "BaseBdev3", 00:15:55.354 "uuid": "2498edc2-4226-11ef-aa83-81fbc7dfef58", 00:15:55.354 "is_configured": true, 00:15:55.354 "data_offset": 2048, 00:15:55.354 "data_size": 63488 00:15:55.354 }, 00:15:55.354 { 00:15:55.354 "name": "BaseBdev4", 00:15:55.354 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:55.354 "is_configured": false, 00:15:55.354 "data_offset": 0, 00:15:55.354 "data_size": 0 00:15:55.354 } 00:15:55.354 ] 00:15:55.354 }' 00:15:55.354 21:15:06 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:55.354 21:15:06 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:55.612 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:15:55.871 [2024-07-14 21:15:07.340858] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:15:55.871 [2024-07-14 21:15:07.340915] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x22f54b834a00 00:15:55.871 [2024-07-14 21:15:07.340921] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:15:55.871 [2024-07-14 21:15:07.340939] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22f54b897e20 00:15:55.871 [2024-07-14 21:15:07.340994] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22f54b834a00 00:15:55.871 [2024-07-14 21:15:07.340998] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x22f54b834a00 00:15:55.871 [2024-07-14 21:15:07.341017] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:15:55.871 BaseBdev4 00:15:55.871 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev4 00:15:55.871 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:15:55.871 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:15:55.871 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:15:55.871 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:15:55.871 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:15:55.871 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:15:56.129 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:15:56.387 [ 00:15:56.387 { 00:15:56.387 "name": "BaseBdev4", 00:15:56.387 "aliases": [ 00:15:56.387 "25477e1c-4226-11ef-aa83-81fbc7dfef58" 00:15:56.387 ], 00:15:56.387 "product_name": "Malloc disk", 00:15:56.387 "block_size": 512, 00:15:56.387 "num_blocks": 65536, 00:15:56.387 "uuid": "25477e1c-4226-11ef-aa83-81fbc7dfef58", 00:15:56.387 
"assigned_rate_limits": { 00:15:56.387 "rw_ios_per_sec": 0, 00:15:56.387 "rw_mbytes_per_sec": 0, 00:15:56.387 "r_mbytes_per_sec": 0, 00:15:56.387 "w_mbytes_per_sec": 0 00:15:56.387 }, 00:15:56.387 "claimed": true, 00:15:56.387 "claim_type": "exclusive_write", 00:15:56.387 "zoned": false, 00:15:56.387 "supported_io_types": { 00:15:56.387 "read": true, 00:15:56.387 "write": true, 00:15:56.387 "unmap": true, 00:15:56.387 "flush": true, 00:15:56.387 "reset": true, 00:15:56.387 "nvme_admin": false, 00:15:56.387 "nvme_io": false, 00:15:56.387 "nvme_io_md": false, 00:15:56.387 "write_zeroes": true, 00:15:56.387 "zcopy": true, 00:15:56.387 "get_zone_info": false, 00:15:56.387 "zone_management": false, 00:15:56.387 "zone_append": false, 00:15:56.387 "compare": false, 00:15:56.387 "compare_and_write": false, 00:15:56.387 "abort": true, 00:15:56.387 "seek_hole": false, 00:15:56.387 "seek_data": false, 00:15:56.387 "copy": true, 00:15:56.387 "nvme_iov_md": false 00:15:56.387 }, 00:15:56.387 "memory_domains": [ 00:15:56.387 { 00:15:56.387 "dma_device_id": "system", 00:15:56.387 "dma_device_type": 1 00:15:56.387 }, 00:15:56.387 { 00:15:56.387 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:56.387 "dma_device_type": 2 00:15:56.387 } 00:15:56.387 ], 00:15:56.387 "driver_specific": {} 00:15:56.387 } 00:15:56.387 ] 00:15:56.387 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:15:56.387 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:15:56.387 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:15:56.387 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:15:56.387 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:56.387 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:56.387 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:56.387 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:56.387 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:15:56.387 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:56.387 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:56.387 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:56.387 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:56.388 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:56.388 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:56.647 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:56.647 "name": "Existed_Raid", 00:15:56.647 "uuid": "23526160-4226-11ef-aa83-81fbc7dfef58", 00:15:56.647 "strip_size_kb": 0, 00:15:56.647 "state": "online", 00:15:56.647 "raid_level": "raid1", 00:15:56.647 "superblock": true, 00:15:56.647 "num_base_bdevs": 4, 00:15:56.647 "num_base_bdevs_discovered": 
4, 00:15:56.647 "num_base_bdevs_operational": 4, 00:15:56.647 "base_bdevs_list": [ 00:15:56.647 { 00:15:56.647 "name": "BaseBdev1", 00:15:56.647 "uuid": "22708901-4226-11ef-aa83-81fbc7dfef58", 00:15:56.647 "is_configured": true, 00:15:56.647 "data_offset": 2048, 00:15:56.647 "data_size": 63488 00:15:56.647 }, 00:15:56.647 { 00:15:56.647 "name": "BaseBdev2", 00:15:56.647 "uuid": "23cf804f-4226-11ef-aa83-81fbc7dfef58", 00:15:56.647 "is_configured": true, 00:15:56.647 "data_offset": 2048, 00:15:56.647 "data_size": 63488 00:15:56.647 }, 00:15:56.647 { 00:15:56.647 "name": "BaseBdev3", 00:15:56.647 "uuid": "2498edc2-4226-11ef-aa83-81fbc7dfef58", 00:15:56.647 "is_configured": true, 00:15:56.647 "data_offset": 2048, 00:15:56.647 "data_size": 63488 00:15:56.647 }, 00:15:56.647 { 00:15:56.647 "name": "BaseBdev4", 00:15:56.647 "uuid": "25477e1c-4226-11ef-aa83-81fbc7dfef58", 00:15:56.647 "is_configured": true, 00:15:56.647 "data_offset": 2048, 00:15:56.647 "data_size": 63488 00:15:56.647 } 00:15:56.647 ] 00:15:56.647 }' 00:15:56.647 21:15:07 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:56.647 21:15:07 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:56.905 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:15:56.905 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:15:56.905 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:15:56.905 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:15:56.905 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:15:56.905 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:15:56.905 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:15:56.905 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:15:57.164 [2024-07-14 21:15:08.500863] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:15:57.164 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:15:57.164 "name": "Existed_Raid", 00:15:57.164 "aliases": [ 00:15:57.164 "23526160-4226-11ef-aa83-81fbc7dfef58" 00:15:57.164 ], 00:15:57.164 "product_name": "Raid Volume", 00:15:57.164 "block_size": 512, 00:15:57.164 "num_blocks": 63488, 00:15:57.164 "uuid": "23526160-4226-11ef-aa83-81fbc7dfef58", 00:15:57.164 "assigned_rate_limits": { 00:15:57.164 "rw_ios_per_sec": 0, 00:15:57.164 "rw_mbytes_per_sec": 0, 00:15:57.164 "r_mbytes_per_sec": 0, 00:15:57.164 "w_mbytes_per_sec": 0 00:15:57.164 }, 00:15:57.164 "claimed": false, 00:15:57.164 "zoned": false, 00:15:57.164 "supported_io_types": { 00:15:57.164 "read": true, 00:15:57.164 "write": true, 00:15:57.164 "unmap": false, 00:15:57.164 "flush": false, 00:15:57.164 "reset": true, 00:15:57.164 "nvme_admin": false, 00:15:57.164 "nvme_io": false, 00:15:57.164 "nvme_io_md": false, 00:15:57.164 "write_zeroes": true, 00:15:57.164 "zcopy": false, 00:15:57.164 "get_zone_info": false, 00:15:57.164 "zone_management": false, 00:15:57.164 "zone_append": false, 00:15:57.164 "compare": false, 00:15:57.164 "compare_and_write": false, 00:15:57.164 "abort": 
false, 00:15:57.164 "seek_hole": false, 00:15:57.164 "seek_data": false, 00:15:57.164 "copy": false, 00:15:57.164 "nvme_iov_md": false 00:15:57.164 }, 00:15:57.164 "memory_domains": [ 00:15:57.164 { 00:15:57.164 "dma_device_id": "system", 00:15:57.164 "dma_device_type": 1 00:15:57.164 }, 00:15:57.164 { 00:15:57.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.164 "dma_device_type": 2 00:15:57.164 }, 00:15:57.164 { 00:15:57.164 "dma_device_id": "system", 00:15:57.164 "dma_device_type": 1 00:15:57.164 }, 00:15:57.164 { 00:15:57.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.164 "dma_device_type": 2 00:15:57.164 }, 00:15:57.164 { 00:15:57.164 "dma_device_id": "system", 00:15:57.164 "dma_device_type": 1 00:15:57.164 }, 00:15:57.164 { 00:15:57.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.164 "dma_device_type": 2 00:15:57.164 }, 00:15:57.164 { 00:15:57.164 "dma_device_id": "system", 00:15:57.164 "dma_device_type": 1 00:15:57.164 }, 00:15:57.164 { 00:15:57.164 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.164 "dma_device_type": 2 00:15:57.164 } 00:15:57.164 ], 00:15:57.164 "driver_specific": { 00:15:57.164 "raid": { 00:15:57.164 "uuid": "23526160-4226-11ef-aa83-81fbc7dfef58", 00:15:57.164 "strip_size_kb": 0, 00:15:57.164 "state": "online", 00:15:57.164 "raid_level": "raid1", 00:15:57.164 "superblock": true, 00:15:57.164 "num_base_bdevs": 4, 00:15:57.164 "num_base_bdevs_discovered": 4, 00:15:57.164 "num_base_bdevs_operational": 4, 00:15:57.164 "base_bdevs_list": [ 00:15:57.164 { 00:15:57.164 "name": "BaseBdev1", 00:15:57.164 "uuid": "22708901-4226-11ef-aa83-81fbc7dfef58", 00:15:57.164 "is_configured": true, 00:15:57.164 "data_offset": 2048, 00:15:57.164 "data_size": 63488 00:15:57.164 }, 00:15:57.164 { 00:15:57.164 "name": "BaseBdev2", 00:15:57.164 "uuid": "23cf804f-4226-11ef-aa83-81fbc7dfef58", 00:15:57.164 "is_configured": true, 00:15:57.164 "data_offset": 2048, 00:15:57.164 "data_size": 63488 00:15:57.164 }, 00:15:57.164 { 00:15:57.164 "name": "BaseBdev3", 00:15:57.164 "uuid": "2498edc2-4226-11ef-aa83-81fbc7dfef58", 00:15:57.164 "is_configured": true, 00:15:57.164 "data_offset": 2048, 00:15:57.164 "data_size": 63488 00:15:57.164 }, 00:15:57.164 { 00:15:57.164 "name": "BaseBdev4", 00:15:57.164 "uuid": "25477e1c-4226-11ef-aa83-81fbc7dfef58", 00:15:57.164 "is_configured": true, 00:15:57.164 "data_offset": 2048, 00:15:57.164 "data_size": 63488 00:15:57.164 } 00:15:57.164 ] 00:15:57.164 } 00:15:57.164 } 00:15:57.164 }' 00:15:57.164 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:15:57.164 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:15:57.164 BaseBdev2 00:15:57.164 BaseBdev3 00:15:57.164 BaseBdev4' 00:15:57.164 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.164 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:15:57.164 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.423 "name": "BaseBdev1", 00:15:57.423 "aliases": [ 00:15:57.423 "22708901-4226-11ef-aa83-81fbc7dfef58" 00:15:57.423 ], 00:15:57.423 "product_name": "Malloc disk", 00:15:57.423 
"block_size": 512, 00:15:57.423 "num_blocks": 65536, 00:15:57.423 "uuid": "22708901-4226-11ef-aa83-81fbc7dfef58", 00:15:57.423 "assigned_rate_limits": { 00:15:57.423 "rw_ios_per_sec": 0, 00:15:57.423 "rw_mbytes_per_sec": 0, 00:15:57.423 "r_mbytes_per_sec": 0, 00:15:57.423 "w_mbytes_per_sec": 0 00:15:57.423 }, 00:15:57.423 "claimed": true, 00:15:57.423 "claim_type": "exclusive_write", 00:15:57.423 "zoned": false, 00:15:57.423 "supported_io_types": { 00:15:57.423 "read": true, 00:15:57.423 "write": true, 00:15:57.423 "unmap": true, 00:15:57.423 "flush": true, 00:15:57.423 "reset": true, 00:15:57.423 "nvme_admin": false, 00:15:57.423 "nvme_io": false, 00:15:57.423 "nvme_io_md": false, 00:15:57.423 "write_zeroes": true, 00:15:57.423 "zcopy": true, 00:15:57.423 "get_zone_info": false, 00:15:57.423 "zone_management": false, 00:15:57.423 "zone_append": false, 00:15:57.423 "compare": false, 00:15:57.423 "compare_and_write": false, 00:15:57.423 "abort": true, 00:15:57.423 "seek_hole": false, 00:15:57.423 "seek_data": false, 00:15:57.423 "copy": true, 00:15:57.423 "nvme_iov_md": false 00:15:57.423 }, 00:15:57.423 "memory_domains": [ 00:15:57.423 { 00:15:57.423 "dma_device_id": "system", 00:15:57.423 "dma_device_type": 1 00:15:57.423 }, 00:15:57.423 { 00:15:57.423 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.423 "dma_device_type": 2 00:15:57.423 } 00:15:57.423 ], 00:15:57.423 "driver_specific": {} 00:15:57.423 }' 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:15:57.423 21:15:08 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.682 "name": "BaseBdev2", 00:15:57.682 "aliases": [ 00:15:57.682 "23cf804f-4226-11ef-aa83-81fbc7dfef58" 00:15:57.682 ], 00:15:57.682 "product_name": "Malloc disk", 00:15:57.682 "block_size": 512, 00:15:57.682 "num_blocks": 65536, 00:15:57.682 "uuid": "23cf804f-4226-11ef-aa83-81fbc7dfef58", 00:15:57.682 "assigned_rate_limits": { 
00:15:57.682 "rw_ios_per_sec": 0, 00:15:57.682 "rw_mbytes_per_sec": 0, 00:15:57.682 "r_mbytes_per_sec": 0, 00:15:57.682 "w_mbytes_per_sec": 0 00:15:57.682 }, 00:15:57.682 "claimed": true, 00:15:57.682 "claim_type": "exclusive_write", 00:15:57.682 "zoned": false, 00:15:57.682 "supported_io_types": { 00:15:57.682 "read": true, 00:15:57.682 "write": true, 00:15:57.682 "unmap": true, 00:15:57.682 "flush": true, 00:15:57.682 "reset": true, 00:15:57.682 "nvme_admin": false, 00:15:57.682 "nvme_io": false, 00:15:57.682 "nvme_io_md": false, 00:15:57.682 "write_zeroes": true, 00:15:57.682 "zcopy": true, 00:15:57.682 "get_zone_info": false, 00:15:57.682 "zone_management": false, 00:15:57.682 "zone_append": false, 00:15:57.682 "compare": false, 00:15:57.682 "compare_and_write": false, 00:15:57.682 "abort": true, 00:15:57.682 "seek_hole": false, 00:15:57.682 "seek_data": false, 00:15:57.682 "copy": true, 00:15:57.682 "nvme_iov_md": false 00:15:57.682 }, 00:15:57.682 "memory_domains": [ 00:15:57.682 { 00:15:57.682 "dma_device_id": "system", 00:15:57.682 "dma_device_type": 1 00:15:57.682 }, 00:15:57.682 { 00:15:57.682 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.682 "dma_device_type": 2 00:15:57.682 } 00:15:57.682 ], 00:15:57.682 "driver_specific": {} 00:15:57.682 }' 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:15:57.682 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:57.942 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:57.942 "name": "BaseBdev3", 00:15:57.942 "aliases": [ 00:15:57.942 "2498edc2-4226-11ef-aa83-81fbc7dfef58" 00:15:57.942 ], 00:15:57.942 "product_name": "Malloc disk", 00:15:57.942 "block_size": 512, 00:15:57.942 "num_blocks": 65536, 00:15:57.942 "uuid": "2498edc2-4226-11ef-aa83-81fbc7dfef58", 00:15:57.942 "assigned_rate_limits": { 00:15:57.942 "rw_ios_per_sec": 0, 00:15:57.942 "rw_mbytes_per_sec": 0, 00:15:57.942 "r_mbytes_per_sec": 0, 00:15:57.942 "w_mbytes_per_sec": 0 
00:15:57.942 }, 00:15:57.942 "claimed": true, 00:15:57.942 "claim_type": "exclusive_write", 00:15:57.942 "zoned": false, 00:15:57.942 "supported_io_types": { 00:15:57.942 "read": true, 00:15:57.942 "write": true, 00:15:57.942 "unmap": true, 00:15:57.942 "flush": true, 00:15:57.942 "reset": true, 00:15:57.942 "nvme_admin": false, 00:15:57.942 "nvme_io": false, 00:15:57.942 "nvme_io_md": false, 00:15:57.942 "write_zeroes": true, 00:15:57.942 "zcopy": true, 00:15:57.942 "get_zone_info": false, 00:15:57.942 "zone_management": false, 00:15:57.942 "zone_append": false, 00:15:57.942 "compare": false, 00:15:57.942 "compare_and_write": false, 00:15:57.942 "abort": true, 00:15:57.942 "seek_hole": false, 00:15:57.942 "seek_data": false, 00:15:57.942 "copy": true, 00:15:57.942 "nvme_iov_md": false 00:15:57.942 }, 00:15:57.942 "memory_domains": [ 00:15:57.942 { 00:15:57.942 "dma_device_id": "system", 00:15:57.942 "dma_device_type": 1 00:15:57.942 }, 00:15:57.942 { 00:15:57.942 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:57.942 "dma_device_type": 2 00:15:57.942 } 00:15:57.942 ], 00:15:57.942 "driver_specific": {} 00:15:57.942 }' 00:15:57.942 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.942 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:57.942 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:57.942 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.942 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:57.942 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:57.942 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:57.942 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.231 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:58.231 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.231 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.231 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:58.231 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:15:58.231 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:15:58.231 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:15:58.490 "name": "BaseBdev4", 00:15:58.490 "aliases": [ 00:15:58.490 "25477e1c-4226-11ef-aa83-81fbc7dfef58" 00:15:58.490 ], 00:15:58.490 "product_name": "Malloc disk", 00:15:58.490 "block_size": 512, 00:15:58.490 "num_blocks": 65536, 00:15:58.490 "uuid": "25477e1c-4226-11ef-aa83-81fbc7dfef58", 00:15:58.490 "assigned_rate_limits": { 00:15:58.490 "rw_ios_per_sec": 0, 00:15:58.490 "rw_mbytes_per_sec": 0, 00:15:58.490 "r_mbytes_per_sec": 0, 00:15:58.490 "w_mbytes_per_sec": 0 00:15:58.490 }, 00:15:58.490 "claimed": true, 00:15:58.490 "claim_type": "exclusive_write", 00:15:58.490 "zoned": false, 00:15:58.490 
"supported_io_types": { 00:15:58.490 "read": true, 00:15:58.490 "write": true, 00:15:58.490 "unmap": true, 00:15:58.490 "flush": true, 00:15:58.490 "reset": true, 00:15:58.490 "nvme_admin": false, 00:15:58.490 "nvme_io": false, 00:15:58.490 "nvme_io_md": false, 00:15:58.490 "write_zeroes": true, 00:15:58.490 "zcopy": true, 00:15:58.490 "get_zone_info": false, 00:15:58.490 "zone_management": false, 00:15:58.490 "zone_append": false, 00:15:58.490 "compare": false, 00:15:58.490 "compare_and_write": false, 00:15:58.490 "abort": true, 00:15:58.490 "seek_hole": false, 00:15:58.490 "seek_data": false, 00:15:58.490 "copy": true, 00:15:58.490 "nvme_iov_md": false 00:15:58.490 }, 00:15:58.490 "memory_domains": [ 00:15:58.490 { 00:15:58.490 "dma_device_id": "system", 00:15:58.490 "dma_device_type": 1 00:15:58.490 }, 00:15:58.490 { 00:15:58.490 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:15:58.490 "dma_device_type": 2 00:15:58.490 } 00:15:58.490 ], 00:15:58.490 "driver_specific": {} 00:15:58.490 }' 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:15:58.490 21:15:09 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:15:58.749 [2024-07-14 21:15:10.116960] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@275 -- # local expected_state 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@213 -- # case $1 in 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@214 -- # return 0 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 3 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:58.749 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:15:59.008 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:15:59.008 "name": "Existed_Raid", 00:15:59.008 "uuid": "23526160-4226-11ef-aa83-81fbc7dfef58", 00:15:59.008 "strip_size_kb": 0, 00:15:59.008 "state": "online", 00:15:59.008 "raid_level": "raid1", 00:15:59.008 "superblock": true, 00:15:59.008 "num_base_bdevs": 4, 00:15:59.008 "num_base_bdevs_discovered": 3, 00:15:59.008 "num_base_bdevs_operational": 3, 00:15:59.008 "base_bdevs_list": [ 00:15:59.008 { 00:15:59.008 "name": null, 00:15:59.008 "uuid": "00000000-0000-0000-0000-000000000000", 00:15:59.008 "is_configured": false, 00:15:59.008 "data_offset": 2048, 00:15:59.008 "data_size": 63488 00:15:59.008 }, 00:15:59.008 { 00:15:59.008 "name": "BaseBdev2", 00:15:59.008 "uuid": "23cf804f-4226-11ef-aa83-81fbc7dfef58", 00:15:59.008 "is_configured": true, 00:15:59.008 "data_offset": 2048, 00:15:59.008 "data_size": 63488 00:15:59.008 }, 00:15:59.008 { 00:15:59.008 "name": "BaseBdev3", 00:15:59.008 "uuid": "2498edc2-4226-11ef-aa83-81fbc7dfef58", 00:15:59.008 "is_configured": true, 00:15:59.008 "data_offset": 2048, 00:15:59.008 "data_size": 63488 00:15:59.008 }, 00:15:59.008 { 00:15:59.008 "name": "BaseBdev4", 00:15:59.008 "uuid": "25477e1c-4226-11ef-aa83-81fbc7dfef58", 00:15:59.008 "is_configured": true, 00:15:59.008 "data_offset": 2048, 00:15:59.008 "data_size": 63488 00:15:59.008 } 00:15:59.008 ] 00:15:59.008 }' 00:15:59.008 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:15:59.008 21:15:10 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:15:59.267 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:15:59.267 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:59.267 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:15:59.267 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.525 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:15:59.526 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:15:59.526 21:15:10 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:15:59.790 [2024-07-14 21:15:11.193538] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:15:59.790 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:15:59.790 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:15:59.790 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:15:59.790 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:00.049 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:00.049 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.049 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev3 00:16:00.307 [2024-07-14 21:15:11.617994] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:00.307 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:00.307 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:00.307 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:00.307 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.565 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:00.565 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:00.565 21:15:11 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev4 00:16:00.823 [2024-07-14 21:15:12.118480] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev4 00:16:00.823 [2024-07-14 21:15:12.118518] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:00.823 [2024-07-14 21:15:12.126886] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:00.823 [2024-07-14 21:15:12.126908] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:00.823 [2024-07-14 21:15:12.126912] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22f54b834a00 name Existed_Raid, state offline 00:16:00.823 21:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:00.823 21:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:00.823 21:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:00.823 21:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:01.082 21:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:01.082 21:15:12 bdev_raid.raid_state_function_test_sb -- 
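The loop traced above tears the raid1 array down one leg at a time: each bdev_malloc_delete removes a base bdev out from under the array, and bdev_raid_get_bdevs is re-queried after every deletion. Because raid1 mirrors data, the array stays reported as Existed_Raid while legs disappear; only after the last leg (BaseBdev4) goes does the state change from online to offline and the raid bdev get cleaned up. Replayed by hand against the same socket, the sequence is roughly the following minimal sketch (rpc here is shorthand for the rpc.py invocation used throughout the trace):

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  $rpc bdev_malloc_delete BaseBdev2
  $rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"]'              # still prints Existed_Raid
  $rpc bdev_malloc_delete BaseBdev3
  $rpc bdev_malloc_delete BaseBdev4                                # last leg: online -> offline
  $rpc bdev_raid_get_bdevs all | jq -r '.[0]["name"] | select(.)'  # empty: raid bdev is gone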
bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:01.082 21:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@299 -- # '[' 4 -gt 2 ']' 00:16:01.082 21:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i = 1 )) 00:16:01.082 21:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:01.082 21:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2 00:16:01.341 BaseBdev2 00:16:01.341 21:15:12 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev2 00:16:01.341 21:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:01.341 21:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:01.341 21:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:01.341 21:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:01.341 21:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:01.341 21:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:01.600 21:15:12 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:01.600 [ 00:16:01.600 { 00:16:01.600 "name": "BaseBdev2", 00:16:01.600 "aliases": [ 00:16:01.600 "286feb85-4226-11ef-aa83-81fbc7dfef58" 00:16:01.600 ], 00:16:01.600 "product_name": "Malloc disk", 00:16:01.600 "block_size": 512, 00:16:01.600 "num_blocks": 65536, 00:16:01.600 "uuid": "286feb85-4226-11ef-aa83-81fbc7dfef58", 00:16:01.600 "assigned_rate_limits": { 00:16:01.600 "rw_ios_per_sec": 0, 00:16:01.600 "rw_mbytes_per_sec": 0, 00:16:01.600 "r_mbytes_per_sec": 0, 00:16:01.600 "w_mbytes_per_sec": 0 00:16:01.600 }, 00:16:01.600 "claimed": false, 00:16:01.600 "zoned": false, 00:16:01.600 "supported_io_types": { 00:16:01.600 "read": true, 00:16:01.600 "write": true, 00:16:01.600 "unmap": true, 00:16:01.600 "flush": true, 00:16:01.600 "reset": true, 00:16:01.600 "nvme_admin": false, 00:16:01.600 "nvme_io": false, 00:16:01.600 "nvme_io_md": false, 00:16:01.600 "write_zeroes": true, 00:16:01.600 "zcopy": true, 00:16:01.600 "get_zone_info": false, 00:16:01.600 "zone_management": false, 00:16:01.600 "zone_append": false, 00:16:01.600 "compare": false, 00:16:01.600 "compare_and_write": false, 00:16:01.600 "abort": true, 00:16:01.601 "seek_hole": false, 00:16:01.601 "seek_data": false, 00:16:01.601 "copy": true, 00:16:01.601 "nvme_iov_md": false 00:16:01.601 }, 00:16:01.601 "memory_domains": [ 00:16:01.601 { 00:16:01.601 "dma_device_id": "system", 00:16:01.601 "dma_device_type": 1 00:16:01.601 }, 00:16:01.601 { 00:16:01.601 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:01.601 "dma_device_type": 2 00:16:01.601 } 00:16:01.601 ], 00:16:01.601 "driver_specific": {} 00:16:01.601 } 00:16:01.601 ] 00:16:01.601 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:01.601 21:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:01.601 21:15:13 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:01.601 21:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3 00:16:01.858 BaseBdev3 00:16:01.858 21:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev3 00:16:01.858 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev3 00:16:01.858 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:01.858 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:01.858 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:01.858 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:01.858 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:02.116 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 -t 2000 00:16:02.374 [ 00:16:02.374 { 00:16:02.374 "name": "BaseBdev3", 00:16:02.374 "aliases": [ 00:16:02.374 "28d53c81-4226-11ef-aa83-81fbc7dfef58" 00:16:02.374 ], 00:16:02.374 "product_name": "Malloc disk", 00:16:02.374 "block_size": 512, 00:16:02.374 "num_blocks": 65536, 00:16:02.374 "uuid": "28d53c81-4226-11ef-aa83-81fbc7dfef58", 00:16:02.374 "assigned_rate_limits": { 00:16:02.374 "rw_ios_per_sec": 0, 00:16:02.374 "rw_mbytes_per_sec": 0, 00:16:02.374 "r_mbytes_per_sec": 0, 00:16:02.374 "w_mbytes_per_sec": 0 00:16:02.374 }, 00:16:02.374 "claimed": false, 00:16:02.374 "zoned": false, 00:16:02.374 "supported_io_types": { 00:16:02.374 "read": true, 00:16:02.374 "write": true, 00:16:02.374 "unmap": true, 00:16:02.374 "flush": true, 00:16:02.374 "reset": true, 00:16:02.374 "nvme_admin": false, 00:16:02.374 "nvme_io": false, 00:16:02.374 "nvme_io_md": false, 00:16:02.374 "write_zeroes": true, 00:16:02.374 "zcopy": true, 00:16:02.374 "get_zone_info": false, 00:16:02.374 "zone_management": false, 00:16:02.374 "zone_append": false, 00:16:02.374 "compare": false, 00:16:02.374 "compare_and_write": false, 00:16:02.374 "abort": true, 00:16:02.374 "seek_hole": false, 00:16:02.374 "seek_data": false, 00:16:02.374 "copy": true, 00:16:02.374 "nvme_iov_md": false 00:16:02.374 }, 00:16:02.374 "memory_domains": [ 00:16:02.374 { 00:16:02.374 "dma_device_id": "system", 00:16:02.374 "dma_device_type": 1 00:16:02.374 }, 00:16:02.374 { 00:16:02.374 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:02.374 "dma_device_type": 2 00:16:02.374 } 00:16:02.374 ], 00:16:02.374 "driver_specific": {} 00:16:02.374 } 00:16:02.374 ] 00:16:02.374 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:02.374 21:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:02.374 21:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:02.374 21:15:13 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@302 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4 00:16:02.633 BaseBdev4 00:16:02.633 21:15:13 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@303 -- # waitforbdev BaseBdev4 00:16:02.633 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev4 00:16:02.633 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:02.633 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:02.633 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:02.633 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:02.633 21:15:13 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:02.891 21:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 -t 2000 00:16:03.150 [ 00:16:03.150 { 00:16:03.150 "name": "BaseBdev4", 00:16:03.150 "aliases": [ 00:16:03.150 "2935104a-4226-11ef-aa83-81fbc7dfef58" 00:16:03.150 ], 00:16:03.150 "product_name": "Malloc disk", 00:16:03.150 "block_size": 512, 00:16:03.150 "num_blocks": 65536, 00:16:03.150 "uuid": "2935104a-4226-11ef-aa83-81fbc7dfef58", 00:16:03.150 "assigned_rate_limits": { 00:16:03.150 "rw_ios_per_sec": 0, 00:16:03.150 "rw_mbytes_per_sec": 0, 00:16:03.150 "r_mbytes_per_sec": 0, 00:16:03.150 "w_mbytes_per_sec": 0 00:16:03.150 }, 00:16:03.150 "claimed": false, 00:16:03.150 "zoned": false, 00:16:03.150 "supported_io_types": { 00:16:03.150 "read": true, 00:16:03.150 "write": true, 00:16:03.150 "unmap": true, 00:16:03.150 "flush": true, 00:16:03.150 "reset": true, 00:16:03.150 "nvme_admin": false, 00:16:03.150 "nvme_io": false, 00:16:03.150 "nvme_io_md": false, 00:16:03.150 "write_zeroes": true, 00:16:03.150 "zcopy": true, 00:16:03.150 "get_zone_info": false, 00:16:03.150 "zone_management": false, 00:16:03.150 "zone_append": false, 00:16:03.150 "compare": false, 00:16:03.150 "compare_and_write": false, 00:16:03.150 "abort": true, 00:16:03.150 "seek_hole": false, 00:16:03.150 "seek_data": false, 00:16:03.150 "copy": true, 00:16:03.150 "nvme_iov_md": false 00:16:03.150 }, 00:16:03.150 "memory_domains": [ 00:16:03.150 { 00:16:03.150 "dma_device_id": "system", 00:16:03.150 "dma_device_type": 1 00:16:03.150 }, 00:16:03.150 { 00:16:03.150 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:03.150 "dma_device_type": 2 00:16:03.150 } 00:16:03.150 ], 00:16:03.150 "driver_specific": {} 00:16:03.150 } 00:16:03.150 ] 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i++ )) 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@301 -- # (( i < num_base_bdevs )) 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@305 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid 00:16:03.150 [2024-07-14 21:15:14.658869] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:03.150 [2024-07-14 21:15:14.658927] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:03.150 [2024-07-14 21:15:14.658934] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:03.150 [2024-07-14 21:15:14.659342] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:03.150 [2024-07-14 21:15:14.659367] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@306 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.150 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:03.410 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:03.410 "name": "Existed_Raid", 00:16:03.410 "uuid": "29a425e6-4226-11ef-aa83-81fbc7dfef58", 00:16:03.410 "strip_size_kb": 0, 00:16:03.410 "state": "configuring", 00:16:03.410 "raid_level": "raid1", 00:16:03.410 "superblock": true, 00:16:03.410 "num_base_bdevs": 4, 00:16:03.410 "num_base_bdevs_discovered": 3, 00:16:03.410 "num_base_bdevs_operational": 4, 00:16:03.410 "base_bdevs_list": [ 00:16:03.410 { 00:16:03.410 "name": "BaseBdev1", 00:16:03.410 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:03.410 "is_configured": false, 00:16:03.410 "data_offset": 0, 00:16:03.410 "data_size": 0 00:16:03.410 }, 00:16:03.410 { 00:16:03.410 "name": "BaseBdev2", 00:16:03.410 "uuid": "286feb85-4226-11ef-aa83-81fbc7dfef58", 00:16:03.410 "is_configured": true, 00:16:03.410 "data_offset": 2048, 00:16:03.410 "data_size": 63488 00:16:03.410 }, 00:16:03.410 { 00:16:03.410 "name": "BaseBdev3", 00:16:03.410 "uuid": "28d53c81-4226-11ef-aa83-81fbc7dfef58", 00:16:03.410 "is_configured": true, 00:16:03.410 "data_offset": 2048, 00:16:03.410 "data_size": 63488 00:16:03.410 }, 00:16:03.410 { 00:16:03.410 "name": "BaseBdev4", 00:16:03.410 "uuid": "2935104a-4226-11ef-aa83-81fbc7dfef58", 00:16:03.410 "is_configured": true, 00:16:03.410 "data_offset": 2048, 00:16:03.410 "data_size": 63488 00:16:03.410 } 00:16:03.410 ] 00:16:03.410 }' 00:16:03.410 21:15:14 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:03.410 21:15:14 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:03.669 21:15:15 bdev_raid.raid_state_function_test_sb -- 
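The rebuild stage recreates the base bdevs and the array. Each leg is a 32 MiB malloc disk with 512-byte blocks ("32 512" in the trace; 65536 blocks x 512 bytes, as reported in the bdev info dumps above), and each create is followed by the waitforbdev pattern: bdev_wait_for_examine, then bdev_get_bdevs with a 2000 ms timeout. A minimal sketch of the same calls, assuming the same socket; the -s flag is taken here to request the on-disk superblock, consistent with "superblock": true in the raid info this _sb test variant checks:

  rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
  for b in BaseBdev2 BaseBdev3 BaseBdev4; do
      $rpc bdev_malloc_create 32 512 -b "$b"            # 32 MiB, 512-byte blocks
      $rpc bdev_wait_for_examine                        # let examine callbacks settle
      $rpc bdev_get_bdevs -b "$b" -t 2000 > /dev/null   # poll until the bdev shows up
  done
  $rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n Existed_Raid
  # BaseBdev1 does not exist yet ("doesn't exist now" in the trace), so the
  # array is created in the configuring state and waits for the missing leg.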
bdev/bdev_raid.sh@308 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev2 00:16:03.928 [2024-07-14 21:15:15.382914] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:03.928 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@309 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:03.928 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:03.928 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:03.928 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:03.928 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:03.928 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:03.928 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:03.928 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:03.928 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:03.928 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:03.928 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:03.928 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:04.187 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:04.187 "name": "Existed_Raid", 00:16:04.187 "uuid": "29a425e6-4226-11ef-aa83-81fbc7dfef58", 00:16:04.187 "strip_size_kb": 0, 00:16:04.187 "state": "configuring", 00:16:04.187 "raid_level": "raid1", 00:16:04.187 "superblock": true, 00:16:04.187 "num_base_bdevs": 4, 00:16:04.187 "num_base_bdevs_discovered": 2, 00:16:04.187 "num_base_bdevs_operational": 4, 00:16:04.187 "base_bdevs_list": [ 00:16:04.187 { 00:16:04.187 "name": "BaseBdev1", 00:16:04.187 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:04.187 "is_configured": false, 00:16:04.187 "data_offset": 0, 00:16:04.187 "data_size": 0 00:16:04.187 }, 00:16:04.187 { 00:16:04.187 "name": null, 00:16:04.187 "uuid": "286feb85-4226-11ef-aa83-81fbc7dfef58", 00:16:04.187 "is_configured": false, 00:16:04.187 "data_offset": 2048, 00:16:04.187 "data_size": 63488 00:16:04.187 }, 00:16:04.187 { 00:16:04.187 "name": "BaseBdev3", 00:16:04.187 "uuid": "28d53c81-4226-11ef-aa83-81fbc7dfef58", 00:16:04.187 "is_configured": true, 00:16:04.187 "data_offset": 2048, 00:16:04.187 "data_size": 63488 00:16:04.187 }, 00:16:04.187 { 00:16:04.187 "name": "BaseBdev4", 00:16:04.187 "uuid": "2935104a-4226-11ef-aa83-81fbc7dfef58", 00:16:04.187 "is_configured": true, 00:16:04.187 "data_offset": 2048, 00:16:04.187 "data_size": 63488 00:16:04.187 } 00:16:04.187 ] 00:16:04.187 }' 00:16:04.187 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:04.187 21:15:15 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:04.445 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:16:04.446 21:15:15 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:04.704 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@310 -- # [[ false == \f\a\l\s\e ]] 00:16:04.704 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@312 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1 00:16:04.961 [2024-07-14 21:15:16.315074] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:04.961 BaseBdev1 00:16:04.961 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@313 -- # waitforbdev BaseBdev1 00:16:04.961 21:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:04.961 21:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:04.961 21:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:04.961 21:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:04.961 21:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:04.961 21:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:05.219 21:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:05.478 [ 00:16:05.478 { 00:16:05.478 "name": "BaseBdev1", 00:16:05.478 "aliases": [ 00:16:05.478 "2aa0d7f1-4226-11ef-aa83-81fbc7dfef58" 00:16:05.478 ], 00:16:05.478 "product_name": "Malloc disk", 00:16:05.478 "block_size": 512, 00:16:05.478 "num_blocks": 65536, 00:16:05.478 "uuid": "2aa0d7f1-4226-11ef-aa83-81fbc7dfef58", 00:16:05.478 "assigned_rate_limits": { 00:16:05.478 "rw_ios_per_sec": 0, 00:16:05.478 "rw_mbytes_per_sec": 0, 00:16:05.478 "r_mbytes_per_sec": 0, 00:16:05.478 "w_mbytes_per_sec": 0 00:16:05.478 }, 00:16:05.478 "claimed": true, 00:16:05.478 "claim_type": "exclusive_write", 00:16:05.478 "zoned": false, 00:16:05.478 "supported_io_types": { 00:16:05.478 "read": true, 00:16:05.478 "write": true, 00:16:05.478 "unmap": true, 00:16:05.478 "flush": true, 00:16:05.478 "reset": true, 00:16:05.478 "nvme_admin": false, 00:16:05.478 "nvme_io": false, 00:16:05.478 "nvme_io_md": false, 00:16:05.478 "write_zeroes": true, 00:16:05.478 "zcopy": true, 00:16:05.478 "get_zone_info": false, 00:16:05.478 "zone_management": false, 00:16:05.478 "zone_append": false, 00:16:05.478 "compare": false, 00:16:05.478 "compare_and_write": false, 00:16:05.478 "abort": true, 00:16:05.478 "seek_hole": false, 00:16:05.478 "seek_data": false, 00:16:05.478 "copy": true, 00:16:05.478 "nvme_iov_md": false 00:16:05.478 }, 00:16:05.478 "memory_domains": [ 00:16:05.478 { 00:16:05.478 "dma_device_id": "system", 00:16:05.478 "dma_device_type": 1 00:16:05.478 }, 00:16:05.478 { 00:16:05.478 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:05.478 "dma_device_type": 2 00:16:05.478 } 00:16:05.478 ], 00:16:05.478 "driver_specific": {} 00:16:05.478 } 00:16:05.478 ] 00:16:05.478 21:15:16 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:05.478 21:15:16 bdev_raid.raid_state_function_test_sb -- 
bdev/bdev_raid.sh@314 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:05.478 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:05.478 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:05.478 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:05.478 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:05.478 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:05.478 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:05.478 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:05.478 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:05.478 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:05.478 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:05.478 21:15:16 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.737 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:05.737 "name": "Existed_Raid", 00:16:05.737 "uuid": "29a425e6-4226-11ef-aa83-81fbc7dfef58", 00:16:05.737 "strip_size_kb": 0, 00:16:05.737 "state": "configuring", 00:16:05.737 "raid_level": "raid1", 00:16:05.737 "superblock": true, 00:16:05.737 "num_base_bdevs": 4, 00:16:05.737 "num_base_bdevs_discovered": 3, 00:16:05.737 "num_base_bdevs_operational": 4, 00:16:05.737 "base_bdevs_list": [ 00:16:05.737 { 00:16:05.737 "name": "BaseBdev1", 00:16:05.737 "uuid": "2aa0d7f1-4226-11ef-aa83-81fbc7dfef58", 00:16:05.737 "is_configured": true, 00:16:05.737 "data_offset": 2048, 00:16:05.737 "data_size": 63488 00:16:05.737 }, 00:16:05.737 { 00:16:05.737 "name": null, 00:16:05.737 "uuid": "286feb85-4226-11ef-aa83-81fbc7dfef58", 00:16:05.737 "is_configured": false, 00:16:05.737 "data_offset": 2048, 00:16:05.737 "data_size": 63488 00:16:05.737 }, 00:16:05.737 { 00:16:05.737 "name": "BaseBdev3", 00:16:05.737 "uuid": "28d53c81-4226-11ef-aa83-81fbc7dfef58", 00:16:05.737 "is_configured": true, 00:16:05.737 "data_offset": 2048, 00:16:05.737 "data_size": 63488 00:16:05.737 }, 00:16:05.737 { 00:16:05.737 "name": "BaseBdev4", 00:16:05.737 "uuid": "2935104a-4226-11ef-aa83-81fbc7dfef58", 00:16:05.737 "is_configured": true, 00:16:05.737 "data_offset": 2048, 00:16:05.738 "data_size": 63488 00:16:05.738 } 00:16:05.738 ] 00:16:05.738 }' 00:16:05.738 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:05.738 21:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:05.738 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:05.738 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:05.996 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@315 -- # [[ true == \t\r\u\e ]] 00:16:05.996 21:15:17 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@317 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev3 00:16:06.255 [2024-07-14 21:15:17.690894] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev3 00:16:06.255 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@318 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:06.255 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:06.255 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:06.255 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:06.255 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:06.255 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:06.255 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:06.255 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:06.255 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:06.255 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:06.255 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.255 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:06.514 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:06.514 "name": "Existed_Raid", 00:16:06.514 "uuid": "29a425e6-4226-11ef-aa83-81fbc7dfef58", 00:16:06.514 "strip_size_kb": 0, 00:16:06.514 "state": "configuring", 00:16:06.514 "raid_level": "raid1", 00:16:06.514 "superblock": true, 00:16:06.514 "num_base_bdevs": 4, 00:16:06.514 "num_base_bdevs_discovered": 2, 00:16:06.514 "num_base_bdevs_operational": 4, 00:16:06.514 "base_bdevs_list": [ 00:16:06.514 { 00:16:06.514 "name": "BaseBdev1", 00:16:06.514 "uuid": "2aa0d7f1-4226-11ef-aa83-81fbc7dfef58", 00:16:06.514 "is_configured": true, 00:16:06.514 "data_offset": 2048, 00:16:06.514 "data_size": 63488 00:16:06.514 }, 00:16:06.514 { 00:16:06.514 "name": null, 00:16:06.514 "uuid": "286feb85-4226-11ef-aa83-81fbc7dfef58", 00:16:06.514 "is_configured": false, 00:16:06.514 "data_offset": 2048, 00:16:06.514 "data_size": 63488 00:16:06.514 }, 00:16:06.514 { 00:16:06.514 "name": null, 00:16:06.514 "uuid": "28d53c81-4226-11ef-aa83-81fbc7dfef58", 00:16:06.514 "is_configured": false, 00:16:06.514 "data_offset": 2048, 00:16:06.514 "data_size": 63488 00:16:06.514 }, 00:16:06.514 { 00:16:06.514 "name": "BaseBdev4", 00:16:06.514 "uuid": "2935104a-4226-11ef-aa83-81fbc7dfef58", 00:16:06.514 "is_configured": true, 00:16:06.514 "data_offset": 2048, 00:16:06.514 "data_size": 63488 00:16:06.514 } 00:16:06.514 ] 00:16:06.514 }' 00:16:06.514 21:15:17 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:06.514 21:15:17 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:06.773 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:06.773 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:07.031 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@319 -- # [[ false == \f\a\l\s\e ]] 00:16:07.031 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@321 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev3 00:16:07.292 [2024-07-14 21:15:18.738898] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:07.292 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@322 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:07.292 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:07.292 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:07.292 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:07.292 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:07.292 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:07.292 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:07.292 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:07.292 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:07.292 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:07.292 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.292 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:07.551 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:07.551 "name": "Existed_Raid", 00:16:07.551 "uuid": "29a425e6-4226-11ef-aa83-81fbc7dfef58", 00:16:07.551 "strip_size_kb": 0, 00:16:07.551 "state": "configuring", 00:16:07.551 "raid_level": "raid1", 00:16:07.551 "superblock": true, 00:16:07.551 "num_base_bdevs": 4, 00:16:07.551 "num_base_bdevs_discovered": 3, 00:16:07.551 "num_base_bdevs_operational": 4, 00:16:07.551 "base_bdevs_list": [ 00:16:07.551 { 00:16:07.551 "name": "BaseBdev1", 00:16:07.551 "uuid": "2aa0d7f1-4226-11ef-aa83-81fbc7dfef58", 00:16:07.551 "is_configured": true, 00:16:07.551 "data_offset": 2048, 00:16:07.551 "data_size": 63488 00:16:07.551 }, 00:16:07.551 { 00:16:07.551 "name": null, 00:16:07.551 "uuid": "286feb85-4226-11ef-aa83-81fbc7dfef58", 00:16:07.551 "is_configured": false, 00:16:07.551 "data_offset": 2048, 00:16:07.551 "data_size": 63488 00:16:07.551 }, 00:16:07.551 { 00:16:07.551 "name": "BaseBdev3", 00:16:07.551 "uuid": "28d53c81-4226-11ef-aa83-81fbc7dfef58", 00:16:07.551 "is_configured": true, 00:16:07.551 "data_offset": 2048, 00:16:07.551 "data_size": 63488 00:16:07.551 }, 00:16:07.551 { 00:16:07.551 "name": "BaseBdev4", 00:16:07.551 "uuid": "2935104a-4226-11ef-aa83-81fbc7dfef58", 00:16:07.551 "is_configured": true, 00:16:07.551 "data_offset": 2048, 
00:16:07.551 "data_size": 63488 00:16:07.551 } 00:16:07.551 ] 00:16:07.551 }' 00:16:07.551 21:15:18 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:07.551 21:15:18 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:07.810 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:07.810 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # jq '.[0].base_bdevs_list[2].is_configured' 00:16:08.069 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@323 -- # [[ true == \t\r\u\e ]] 00:16:08.069 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@325 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:08.328 [2024-07-14 21:15:19.670918] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:08.328 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@326 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:08.328 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:08.328 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:08.328 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:08.328 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:08.328 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:08.328 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:08.328 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:08.328 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:08.328 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:08.328 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.328 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:08.587 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:08.587 "name": "Existed_Raid", 00:16:08.587 "uuid": "29a425e6-4226-11ef-aa83-81fbc7dfef58", 00:16:08.587 "strip_size_kb": 0, 00:16:08.587 "state": "configuring", 00:16:08.587 "raid_level": "raid1", 00:16:08.587 "superblock": true, 00:16:08.587 "num_base_bdevs": 4, 00:16:08.587 "num_base_bdevs_discovered": 2, 00:16:08.587 "num_base_bdevs_operational": 4, 00:16:08.587 "base_bdevs_list": [ 00:16:08.587 { 00:16:08.587 "name": null, 00:16:08.587 "uuid": "2aa0d7f1-4226-11ef-aa83-81fbc7dfef58", 00:16:08.587 "is_configured": false, 00:16:08.587 "data_offset": 2048, 00:16:08.587 "data_size": 63488 00:16:08.587 }, 00:16:08.587 { 00:16:08.587 "name": null, 00:16:08.587 "uuid": "286feb85-4226-11ef-aa83-81fbc7dfef58", 00:16:08.587 "is_configured": false, 00:16:08.587 "data_offset": 2048, 00:16:08.587 "data_size": 63488 00:16:08.587 }, 00:16:08.587 { 00:16:08.587 "name": "BaseBdev3", 00:16:08.587 "uuid": 
"28d53c81-4226-11ef-aa83-81fbc7dfef58", 00:16:08.587 "is_configured": true, 00:16:08.587 "data_offset": 2048, 00:16:08.587 "data_size": 63488 00:16:08.587 }, 00:16:08.587 { 00:16:08.587 "name": "BaseBdev4", 00:16:08.587 "uuid": "2935104a-4226-11ef-aa83-81fbc7dfef58", 00:16:08.587 "is_configured": true, 00:16:08.587 "data_offset": 2048, 00:16:08.587 "data_size": 63488 00:16:08.587 } 00:16:08.587 ] 00:16:08.587 }' 00:16:08.587 21:15:19 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:08.587 21:15:19 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:08.846 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:08.846 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # jq '.[0].base_bdevs_list[0].is_configured' 00:16:09.104 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@327 -- # [[ false == \f\a\l\s\e ]] 00:16:09.104 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@329 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev Existed_Raid BaseBdev2 00:16:09.363 [2024-07-14 21:15:20.775083] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:09.363 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@330 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 4 00:16:09.363 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:09.363 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:09.363 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:09.363 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:09.363 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:09.363 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:09.363 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:09.363 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:09.363 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:09.363 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.363 21:15:20 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:09.621 21:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:09.621 "name": "Existed_Raid", 00:16:09.621 "uuid": "29a425e6-4226-11ef-aa83-81fbc7dfef58", 00:16:09.621 "strip_size_kb": 0, 00:16:09.621 "state": "configuring", 00:16:09.621 "raid_level": "raid1", 00:16:09.621 "superblock": true, 00:16:09.621 "num_base_bdevs": 4, 00:16:09.621 "num_base_bdevs_discovered": 3, 00:16:09.621 "num_base_bdevs_operational": 4, 00:16:09.621 "base_bdevs_list": [ 00:16:09.621 { 00:16:09.621 "name": null, 00:16:09.621 "uuid": "2aa0d7f1-4226-11ef-aa83-81fbc7dfef58", 00:16:09.621 "is_configured": false, 
00:16:09.621 "data_offset": 2048, 00:16:09.621 "data_size": 63488 00:16:09.621 }, 00:16:09.621 { 00:16:09.621 "name": "BaseBdev2", 00:16:09.621 "uuid": "286feb85-4226-11ef-aa83-81fbc7dfef58", 00:16:09.621 "is_configured": true, 00:16:09.621 "data_offset": 2048, 00:16:09.621 "data_size": 63488 00:16:09.621 }, 00:16:09.621 { 00:16:09.621 "name": "BaseBdev3", 00:16:09.621 "uuid": "28d53c81-4226-11ef-aa83-81fbc7dfef58", 00:16:09.621 "is_configured": true, 00:16:09.621 "data_offset": 2048, 00:16:09.621 "data_size": 63488 00:16:09.621 }, 00:16:09.621 { 00:16:09.621 "name": "BaseBdev4", 00:16:09.621 "uuid": "2935104a-4226-11ef-aa83-81fbc7dfef58", 00:16:09.621 "is_configured": true, 00:16:09.621 "data_offset": 2048, 00:16:09.621 "data_size": 63488 00:16:09.621 } 00:16:09.621 ] 00:16:09.621 }' 00:16:09.622 21:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:09.622 21:15:21 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:09.880 21:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:09.880 21:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # jq '.[0].base_bdevs_list[1].is_configured' 00:16:10.139 21:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@331 -- # [[ true == \t\r\u\e ]] 00:16:10.139 21:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:10.139 21:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # jq -r '.[0].base_bdevs_list[0].uuid' 00:16:10.398 21:15:21 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@333 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b NewBaseBdev -u 2aa0d7f1-4226-11ef-aa83-81fbc7dfef58 00:16:10.657 [2024-07-14 21:15:22.055214] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev NewBaseBdev is claimed 00:16:10.657 [2024-07-14 21:15:22.055261] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x22f54b834f00 00:16:10.657 [2024-07-14 21:15:22.055266] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:10.657 [2024-07-14 21:15:22.055285] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x22f54b897e20 00:16:10.657 [2024-07-14 21:15:22.055341] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x22f54b834f00 00:16:10.657 [2024-07-14 21:15:22.055345] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x22f54b834f00 00:16:10.657 [2024-07-14 21:15:22.055365] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:10.657 NewBaseBdev 00:16:10.657 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@334 -- # waitforbdev NewBaseBdev 00:16:10.657 21:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@897 -- # local bdev_name=NewBaseBdev 00:16:10.657 21:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:10.657 21:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@899 -- # local i 00:16:10.657 21:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:10.657 21:15:22 bdev_raid.raid_state_function_test_sb 
-- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:10.657 21:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:10.916 21:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev -t 2000 00:16:11.175 [ 00:16:11.175 { 00:16:11.175 "name": "NewBaseBdev", 00:16:11.175 "aliases": [ 00:16:11.175 "2aa0d7f1-4226-11ef-aa83-81fbc7dfef58" 00:16:11.175 ], 00:16:11.175 "product_name": "Malloc disk", 00:16:11.175 "block_size": 512, 00:16:11.175 "num_blocks": 65536, 00:16:11.175 "uuid": "2aa0d7f1-4226-11ef-aa83-81fbc7dfef58", 00:16:11.175 "assigned_rate_limits": { 00:16:11.175 "rw_ios_per_sec": 0, 00:16:11.175 "rw_mbytes_per_sec": 0, 00:16:11.175 "r_mbytes_per_sec": 0, 00:16:11.175 "w_mbytes_per_sec": 0 00:16:11.175 }, 00:16:11.175 "claimed": true, 00:16:11.175 "claim_type": "exclusive_write", 00:16:11.175 "zoned": false, 00:16:11.175 "supported_io_types": { 00:16:11.175 "read": true, 00:16:11.175 "write": true, 00:16:11.175 "unmap": true, 00:16:11.175 "flush": true, 00:16:11.175 "reset": true, 00:16:11.175 "nvme_admin": false, 00:16:11.175 "nvme_io": false, 00:16:11.175 "nvme_io_md": false, 00:16:11.175 "write_zeroes": true, 00:16:11.175 "zcopy": true, 00:16:11.175 "get_zone_info": false, 00:16:11.175 "zone_management": false, 00:16:11.175 "zone_append": false, 00:16:11.175 "compare": false, 00:16:11.175 "compare_and_write": false, 00:16:11.175 "abort": true, 00:16:11.175 "seek_hole": false, 00:16:11.175 "seek_data": false, 00:16:11.175 "copy": true, 00:16:11.175 "nvme_iov_md": false 00:16:11.175 }, 00:16:11.175 "memory_domains": [ 00:16:11.175 { 00:16:11.175 "dma_device_id": "system", 00:16:11.175 "dma_device_type": 1 00:16:11.175 }, 00:16:11.175 { 00:16:11.175 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.175 "dma_device_type": 2 00:16:11.175 } 00:16:11.175 ], 00:16:11.175 "driver_specific": {} 00:16:11.175 } 00:16:11.175 ] 00:16:11.175 21:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@905 -- # return 0 00:16:11.175 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@335 -- # verify_raid_bdev_state Existed_Raid online raid1 0 4 00:16:11.175 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:11.175 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:11.175 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:11.175 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:11.175 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:11.175 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:11.175 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:11.175 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:11.175 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:11.175 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:11.175 
21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:11.435 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:11.435 "name": "Existed_Raid", 00:16:11.435 "uuid": "29a425e6-4226-11ef-aa83-81fbc7dfef58", 00:16:11.435 "strip_size_kb": 0, 00:16:11.435 "state": "online", 00:16:11.435 "raid_level": "raid1", 00:16:11.435 "superblock": true, 00:16:11.435 "num_base_bdevs": 4, 00:16:11.435 "num_base_bdevs_discovered": 4, 00:16:11.435 "num_base_bdevs_operational": 4, 00:16:11.435 "base_bdevs_list": [ 00:16:11.435 { 00:16:11.435 "name": "NewBaseBdev", 00:16:11.435 "uuid": "2aa0d7f1-4226-11ef-aa83-81fbc7dfef58", 00:16:11.435 "is_configured": true, 00:16:11.435 "data_offset": 2048, 00:16:11.435 "data_size": 63488 00:16:11.435 }, 00:16:11.435 { 00:16:11.435 "name": "BaseBdev2", 00:16:11.435 "uuid": "286feb85-4226-11ef-aa83-81fbc7dfef58", 00:16:11.435 "is_configured": true, 00:16:11.435 "data_offset": 2048, 00:16:11.435 "data_size": 63488 00:16:11.435 }, 00:16:11.435 { 00:16:11.435 "name": "BaseBdev3", 00:16:11.435 "uuid": "28d53c81-4226-11ef-aa83-81fbc7dfef58", 00:16:11.435 "is_configured": true, 00:16:11.435 "data_offset": 2048, 00:16:11.435 "data_size": 63488 00:16:11.435 }, 00:16:11.435 { 00:16:11.435 "name": "BaseBdev4", 00:16:11.435 "uuid": "2935104a-4226-11ef-aa83-81fbc7dfef58", 00:16:11.435 "is_configured": true, 00:16:11.435 "data_offset": 2048, 00:16:11.435 "data_size": 63488 00:16:11.435 } 00:16:11.435 ] 00:16:11.435 }' 00:16:11.435 21:15:22 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:11.435 21:15:22 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:11.694 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@336 -- # verify_raid_bdev_properties Existed_Raid 00:16:11.694 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:11.694 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:11.694 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:11.694 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:11.694 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@198 -- # local name 00:16:11.694 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:11.694 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:11.953 [2024-07-14 21:15:23.303154] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:11.953 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:11.953 "name": "Existed_Raid", 00:16:11.953 "aliases": [ 00:16:11.953 "29a425e6-4226-11ef-aa83-81fbc7dfef58" 00:16:11.953 ], 00:16:11.953 "product_name": "Raid Volume", 00:16:11.953 "block_size": 512, 00:16:11.953 "num_blocks": 63488, 00:16:11.953 "uuid": "29a425e6-4226-11ef-aa83-81fbc7dfef58", 00:16:11.953 "assigned_rate_limits": { 00:16:11.953 "rw_ios_per_sec": 0, 00:16:11.953 "rw_mbytes_per_sec": 0, 00:16:11.953 "r_mbytes_per_sec": 0, 00:16:11.953 "w_mbytes_per_sec": 0 00:16:11.953 }, 00:16:11.953 
"claimed": false, 00:16:11.953 "zoned": false, 00:16:11.953 "supported_io_types": { 00:16:11.953 "read": true, 00:16:11.953 "write": true, 00:16:11.953 "unmap": false, 00:16:11.953 "flush": false, 00:16:11.953 "reset": true, 00:16:11.953 "nvme_admin": false, 00:16:11.953 "nvme_io": false, 00:16:11.953 "nvme_io_md": false, 00:16:11.953 "write_zeroes": true, 00:16:11.953 "zcopy": false, 00:16:11.953 "get_zone_info": false, 00:16:11.953 "zone_management": false, 00:16:11.953 "zone_append": false, 00:16:11.953 "compare": false, 00:16:11.953 "compare_and_write": false, 00:16:11.953 "abort": false, 00:16:11.953 "seek_hole": false, 00:16:11.953 "seek_data": false, 00:16:11.953 "copy": false, 00:16:11.953 "nvme_iov_md": false 00:16:11.953 }, 00:16:11.953 "memory_domains": [ 00:16:11.953 { 00:16:11.953 "dma_device_id": "system", 00:16:11.953 "dma_device_type": 1 00:16:11.953 }, 00:16:11.953 { 00:16:11.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.953 "dma_device_type": 2 00:16:11.953 }, 00:16:11.953 { 00:16:11.953 "dma_device_id": "system", 00:16:11.953 "dma_device_type": 1 00:16:11.953 }, 00:16:11.953 { 00:16:11.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.953 "dma_device_type": 2 00:16:11.953 }, 00:16:11.953 { 00:16:11.953 "dma_device_id": "system", 00:16:11.953 "dma_device_type": 1 00:16:11.953 }, 00:16:11.953 { 00:16:11.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.953 "dma_device_type": 2 00:16:11.953 }, 00:16:11.953 { 00:16:11.953 "dma_device_id": "system", 00:16:11.953 "dma_device_type": 1 00:16:11.953 }, 00:16:11.953 { 00:16:11.953 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:11.953 "dma_device_type": 2 00:16:11.953 } 00:16:11.953 ], 00:16:11.953 "driver_specific": { 00:16:11.953 "raid": { 00:16:11.953 "uuid": "29a425e6-4226-11ef-aa83-81fbc7dfef58", 00:16:11.953 "strip_size_kb": 0, 00:16:11.953 "state": "online", 00:16:11.953 "raid_level": "raid1", 00:16:11.953 "superblock": true, 00:16:11.953 "num_base_bdevs": 4, 00:16:11.953 "num_base_bdevs_discovered": 4, 00:16:11.953 "num_base_bdevs_operational": 4, 00:16:11.953 "base_bdevs_list": [ 00:16:11.953 { 00:16:11.953 "name": "NewBaseBdev", 00:16:11.953 "uuid": "2aa0d7f1-4226-11ef-aa83-81fbc7dfef58", 00:16:11.953 "is_configured": true, 00:16:11.953 "data_offset": 2048, 00:16:11.953 "data_size": 63488 00:16:11.953 }, 00:16:11.953 { 00:16:11.953 "name": "BaseBdev2", 00:16:11.953 "uuid": "286feb85-4226-11ef-aa83-81fbc7dfef58", 00:16:11.953 "is_configured": true, 00:16:11.953 "data_offset": 2048, 00:16:11.953 "data_size": 63488 00:16:11.953 }, 00:16:11.953 { 00:16:11.953 "name": "BaseBdev3", 00:16:11.953 "uuid": "28d53c81-4226-11ef-aa83-81fbc7dfef58", 00:16:11.953 "is_configured": true, 00:16:11.953 "data_offset": 2048, 00:16:11.953 "data_size": 63488 00:16:11.953 }, 00:16:11.953 { 00:16:11.953 "name": "BaseBdev4", 00:16:11.953 "uuid": "2935104a-4226-11ef-aa83-81fbc7dfef58", 00:16:11.953 "is_configured": true, 00:16:11.953 "data_offset": 2048, 00:16:11.953 "data_size": 63488 00:16:11.953 } 00:16:11.953 ] 00:16:11.953 } 00:16:11.953 } 00:16:11.953 }' 00:16:11.953 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:11.953 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@201 -- # base_bdev_names='NewBaseBdev 00:16:11.953 BaseBdev2 00:16:11.953 BaseBdev3 00:16:11.953 BaseBdev4' 00:16:11.953 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in 
$base_bdev_names 00:16:11.953 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b NewBaseBdev 00:16:11.953 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.211 "name": "NewBaseBdev", 00:16:12.211 "aliases": [ 00:16:12.211 "2aa0d7f1-4226-11ef-aa83-81fbc7dfef58" 00:16:12.211 ], 00:16:12.211 "product_name": "Malloc disk", 00:16:12.211 "block_size": 512, 00:16:12.211 "num_blocks": 65536, 00:16:12.211 "uuid": "2aa0d7f1-4226-11ef-aa83-81fbc7dfef58", 00:16:12.211 "assigned_rate_limits": { 00:16:12.211 "rw_ios_per_sec": 0, 00:16:12.211 "rw_mbytes_per_sec": 0, 00:16:12.211 "r_mbytes_per_sec": 0, 00:16:12.211 "w_mbytes_per_sec": 0 00:16:12.211 }, 00:16:12.211 "claimed": true, 00:16:12.211 "claim_type": "exclusive_write", 00:16:12.211 "zoned": false, 00:16:12.211 "supported_io_types": { 00:16:12.211 "read": true, 00:16:12.211 "write": true, 00:16:12.211 "unmap": true, 00:16:12.211 "flush": true, 00:16:12.211 "reset": true, 00:16:12.211 "nvme_admin": false, 00:16:12.211 "nvme_io": false, 00:16:12.211 "nvme_io_md": false, 00:16:12.211 "write_zeroes": true, 00:16:12.211 "zcopy": true, 00:16:12.211 "get_zone_info": false, 00:16:12.211 "zone_management": false, 00:16:12.211 "zone_append": false, 00:16:12.211 "compare": false, 00:16:12.211 "compare_and_write": false, 00:16:12.211 "abort": true, 00:16:12.211 "seek_hole": false, 00:16:12.211 "seek_data": false, 00:16:12.211 "copy": true, 00:16:12.211 "nvme_iov_md": false 00:16:12.211 }, 00:16:12.211 "memory_domains": [ 00:16:12.211 { 00:16:12.211 "dma_device_id": "system", 00:16:12.211 "dma_device_type": 1 00:16:12.211 }, 00:16:12.211 { 00:16:12.211 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.211 "dma_device_type": 2 00:16:12.211 } 00:16:12.211 ], 00:16:12.211 "driver_specific": {} 00:16:12.211 }' 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:12.211 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:12.211 21:15:23 
bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.471 "name": "BaseBdev2", 00:16:12.471 "aliases": [ 00:16:12.471 "286feb85-4226-11ef-aa83-81fbc7dfef58" 00:16:12.471 ], 00:16:12.471 "product_name": "Malloc disk", 00:16:12.471 "block_size": 512, 00:16:12.471 "num_blocks": 65536, 00:16:12.471 "uuid": "286feb85-4226-11ef-aa83-81fbc7dfef58", 00:16:12.471 "assigned_rate_limits": { 00:16:12.471 "rw_ios_per_sec": 0, 00:16:12.471 "rw_mbytes_per_sec": 0, 00:16:12.471 "r_mbytes_per_sec": 0, 00:16:12.471 "w_mbytes_per_sec": 0 00:16:12.471 }, 00:16:12.471 "claimed": true, 00:16:12.471 "claim_type": "exclusive_write", 00:16:12.471 "zoned": false, 00:16:12.471 "supported_io_types": { 00:16:12.471 "read": true, 00:16:12.471 "write": true, 00:16:12.471 "unmap": true, 00:16:12.471 "flush": true, 00:16:12.471 "reset": true, 00:16:12.471 "nvme_admin": false, 00:16:12.471 "nvme_io": false, 00:16:12.471 "nvme_io_md": false, 00:16:12.471 "write_zeroes": true, 00:16:12.471 "zcopy": true, 00:16:12.471 "get_zone_info": false, 00:16:12.471 "zone_management": false, 00:16:12.471 "zone_append": false, 00:16:12.471 "compare": false, 00:16:12.471 "compare_and_write": false, 00:16:12.471 "abort": true, 00:16:12.471 "seek_hole": false, 00:16:12.471 "seek_data": false, 00:16:12.471 "copy": true, 00:16:12.471 "nvme_iov_md": false 00:16:12.471 }, 00:16:12.471 "memory_domains": [ 00:16:12.471 { 00:16:12.471 "dma_device_id": "system", 00:16:12.471 "dma_device_type": 1 00:16:12.471 }, 00:16:12.471 { 00:16:12.471 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.471 "dma_device_type": 2 00:16:12.471 } 00:16:12.471 ], 00:16:12.471 "driver_specific": {} 00:16:12.471 }' 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev3 00:16:12.471 21:15:23 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 
-- # jq '.[]' 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.730 "name": "BaseBdev3", 00:16:12.730 "aliases": [ 00:16:12.730 "28d53c81-4226-11ef-aa83-81fbc7dfef58" 00:16:12.730 ], 00:16:12.730 "product_name": "Malloc disk", 00:16:12.730 "block_size": 512, 00:16:12.730 "num_blocks": 65536, 00:16:12.730 "uuid": "28d53c81-4226-11ef-aa83-81fbc7dfef58", 00:16:12.730 "assigned_rate_limits": { 00:16:12.730 "rw_ios_per_sec": 0, 00:16:12.730 "rw_mbytes_per_sec": 0, 00:16:12.730 "r_mbytes_per_sec": 0, 00:16:12.730 "w_mbytes_per_sec": 0 00:16:12.730 }, 00:16:12.730 "claimed": true, 00:16:12.730 "claim_type": "exclusive_write", 00:16:12.730 "zoned": false, 00:16:12.730 "supported_io_types": { 00:16:12.730 "read": true, 00:16:12.730 "write": true, 00:16:12.730 "unmap": true, 00:16:12.730 "flush": true, 00:16:12.730 "reset": true, 00:16:12.730 "nvme_admin": false, 00:16:12.730 "nvme_io": false, 00:16:12.730 "nvme_io_md": false, 00:16:12.730 "write_zeroes": true, 00:16:12.730 "zcopy": true, 00:16:12.730 "get_zone_info": false, 00:16:12.730 "zone_management": false, 00:16:12.730 "zone_append": false, 00:16:12.730 "compare": false, 00:16:12.730 "compare_and_write": false, 00:16:12.730 "abort": true, 00:16:12.730 "seek_hole": false, 00:16:12.730 "seek_data": false, 00:16:12.730 "copy": true, 00:16:12.730 "nvme_iov_md": false 00:16:12.730 }, 00:16:12.730 "memory_domains": [ 00:16:12.730 { 00:16:12.730 "dma_device_id": "system", 00:16:12.730 "dma_device_type": 1 00:16:12.730 }, 00:16:12.730 { 00:16:12.730 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.730 "dma_device_type": 2 00:16:12.730 } 00:16:12.730 ], 00:16:12.730 "driver_specific": {} 00:16:12.730 }' 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:12.730 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev4 00:16:12.988 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:12.988 "name": 
"BaseBdev4", 00:16:12.988 "aliases": [ 00:16:12.988 "2935104a-4226-11ef-aa83-81fbc7dfef58" 00:16:12.988 ], 00:16:12.988 "product_name": "Malloc disk", 00:16:12.988 "block_size": 512, 00:16:12.988 "num_blocks": 65536, 00:16:12.988 "uuid": "2935104a-4226-11ef-aa83-81fbc7dfef58", 00:16:12.988 "assigned_rate_limits": { 00:16:12.988 "rw_ios_per_sec": 0, 00:16:12.988 "rw_mbytes_per_sec": 0, 00:16:12.988 "r_mbytes_per_sec": 0, 00:16:12.988 "w_mbytes_per_sec": 0 00:16:12.988 }, 00:16:12.988 "claimed": true, 00:16:12.988 "claim_type": "exclusive_write", 00:16:12.988 "zoned": false, 00:16:12.988 "supported_io_types": { 00:16:12.988 "read": true, 00:16:12.988 "write": true, 00:16:12.988 "unmap": true, 00:16:12.988 "flush": true, 00:16:12.988 "reset": true, 00:16:12.988 "nvme_admin": false, 00:16:12.988 "nvme_io": false, 00:16:12.988 "nvme_io_md": false, 00:16:12.988 "write_zeroes": true, 00:16:12.988 "zcopy": true, 00:16:12.988 "get_zone_info": false, 00:16:12.988 "zone_management": false, 00:16:12.988 "zone_append": false, 00:16:12.988 "compare": false, 00:16:12.988 "compare_and_write": false, 00:16:12.988 "abort": true, 00:16:12.988 "seek_hole": false, 00:16:12.988 "seek_data": false, 00:16:12.988 "copy": true, 00:16:12.988 "nvme_iov_md": false 00:16:12.988 }, 00:16:12.988 "memory_domains": [ 00:16:12.988 { 00:16:12.988 "dma_device_id": "system", 00:16:12.988 "dma_device_type": 1 00:16:12.988 }, 00:16:12.988 { 00:16:12.988 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:12.988 "dma_device_type": 2 00:16:12.988 } 00:16:12.988 ], 00:16:12.988 "driver_specific": {} 00:16:12.988 }' 00:16:12.989 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:13.247 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:13.247 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:13.247 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:13.247 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:13.247 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:13.247 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:13.247 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:13.247 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:13.247 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:13.247 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:13.247 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:13.247 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@338 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:13.506 [2024-07-14 21:15:24.863129] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:13.506 [2024-07-14 21:15:24.863143] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:13.506 [2024-07-14 21:15:24.863174] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:13.506 [2024-07-14 21:15:24.863265] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, 
going to free all in destruct 00:16:13.506 [2024-07-14 21:15:24.863269] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x22f54b834f00 name Existed_Raid, state offline 00:16:13.506 21:15:24 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@341 -- # killprocess 63685 00:16:13.506 21:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@948 -- # '[' -z 63685 ']' 00:16:13.506 21:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@952 -- # kill -0 63685 00:16:13.506 21:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # uname 00:16:13.506 21:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:13.506 21:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # ps -c -o command 63685 00:16:13.506 21:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # tail -1 00:16:13.506 21:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:16:13.506 21:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:16:13.506 21:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@966 -- # echo 'killing process with pid 63685' 00:16:13.506 killing process with pid 63685 00:16:13.506 21:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@967 -- # kill 63685 00:16:13.506 [2024-07-14 21:15:24.894258] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:13.506 21:15:24 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@972 -- # wait 63685 00:16:13.506 [2024-07-14 21:15:24.929089] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:13.765 21:15:25 bdev_raid.raid_state_function_test_sb -- bdev/bdev_raid.sh@343 -- # return 0 00:16:13.765 00:16:13.765 real 0m25.016s 00:16:13.765 user 0m45.196s 00:16:13.765 sys 0m3.901s 00:16:13.765 21:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:13.765 21:15:25 bdev_raid.raid_state_function_test_sb -- common/autotest_common.sh@10 -- # set +x 00:16:13.765 ************************************ 00:16:13.765 END TEST raid_state_function_test_sb 00:16:13.765 ************************************ 00:16:13.765 21:15:25 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:13.765 21:15:25 bdev_raid -- bdev/bdev_raid.sh@869 -- # run_test raid_superblock_test raid_superblock_test raid1 4 00:16:13.765 21:15:25 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:13.765 21:15:25 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:13.765 21:15:25 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:13.765 ************************************ 00:16:13.765 START TEST raid_superblock_test 00:16:13.765 ************************************ 00:16:13.765 21:15:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 4 00:16:13.765 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:13.765 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=4 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:13.766 21:15:25 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@411 -- # raid_pid=64495 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@412 -- # waitforlisten 64495 /var/tmp/spdk-raid.sock 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@829 -- # '[' -z 64495 ']' 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:13.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:13.766 21:15:25 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:13.766 [2024-07-14 21:15:25.243729] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:13.766 [2024-07-14 21:15:25.244009] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:14.332 EAL: TSC is not safe to use in SMP mode 00:16:14.332 EAL: TSC is not invariant 00:16:14.332 [2024-07-14 21:15:25.759099] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:14.332 [2024-07-14 21:15:25.865072] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
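(For readers following the trace: the EAL and reactor notices above mark the point where raid_superblock_test brings up its own bdev_svc target on a private RPC socket before any bdev RPCs are issued. A minimal sketch of that startup handshake, assuming the SPDK checkout at /home/vagrant/spdk_repo/spdk seen in the trace; the polling loop is a simplified stand-in for the harness's waitforlisten helper, and rpc_get_methods is used here only as a liveness probe, not as part of the test itself:)

    # launch the bare bdev service with RAID debug logging on a private socket
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid &
    raid_pid=$!
    # poll until the target answers on the UNIX socket before driving the test
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock rpc_get_methods >/dev/null 2>&1; do
      sleep 0.1
    done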
00:16:14.332 [2024-07-14 21:15:25.867711] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:14.332 [2024-07-14 21:15:25.868733] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.332 [2024-07-14 21:15:25.868750] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:14.898 21:15:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:14.898 21:15:26 bdev_raid.raid_superblock_test -- common/autotest_common.sh@862 -- # return 0 00:16:14.898 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:14.898 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:14.898 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:14.898 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:14.898 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:14.898 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:14.898 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:14.898 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:14.898 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc1 00:16:15.156 malloc1 00:16:15.156 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:15.414 [2024-07-14 21:15:26.747530] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:15.414 [2024-07-14 21:15:26.747593] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.414 [2024-07-14 21:15:26.747620] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c634780 00:16:15.414 [2024-07-14 21:15:26.747627] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.414 [2024-07-14 21:15:26.748702] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.414 [2024-07-14 21:15:26.748730] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:15.414 pt1 00:16:15.414 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:15.414 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:15.414 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:15.414 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:15.414 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:15.414 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.414 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.414 21:15:26 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.414 21:15:26 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc2 00:16:15.672 malloc2 00:16:15.672 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:15.930 [2024-07-14 21:15:27.255566] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:15.930 [2024-07-14 21:15:27.255628] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:15.930 [2024-07-14 21:15:27.255654] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c634c80 00:16:15.930 [2024-07-14 21:15:27.255662] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:15.930 [2024-07-14 21:15:27.256356] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:15.930 [2024-07-14 21:15:27.256383] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:15.930 pt2 00:16:15.930 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:15.930 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:15.930 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc3 00:16:15.930 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt3 00:16:15.930 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000003 00:16:15.930 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:15.930 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:15.930 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:15.930 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc3 00:16:16.189 malloc3 00:16:16.189 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:16.189 [2024-07-14 21:15:27.723613] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:16.189 [2024-07-14 21:15:27.723676] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.189 [2024-07-14 21:15:27.723703] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c635180 00:16:16.189 [2024-07-14 21:15:27.723710] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.189 [2024-07-14 21:15:27.724437] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.189 [2024-07-14 21:15:27.724471] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:16.189 pt3 00:16:16.447 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:16.447 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:16.447 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@416 -- # local 
bdev_malloc=malloc4 00:16:16.447 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt4 00:16:16.447 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000004 00:16:16.447 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:16.447 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:16.447 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:16.447 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b malloc4 00:16:16.447 malloc4 00:16:16.447 21:15:27 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:16.705 [2024-07-14 21:15:28.251627] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:16.705 [2024-07-14 21:15:28.251680] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:16.705 [2024-07-14 21:15:28.251707] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c635680 00:16:16.705 [2024-07-14 21:15:28.251714] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:16.705 [2024-07-14 21:15:28.252314] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:16.705 [2024-07-14 21:15:28.252336] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:16.963 pt4 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2 pt3 pt4' -n raid_bdev1 -s 00:16:16.963 [2024-07-14 21:15:28.463645] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:16.963 [2024-07-14 21:15:28.464280] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:16.963 [2024-07-14 21:15:28.464302] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:16.963 [2024-07-14 21:15:28.464314] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:16.963 [2024-07-14 21:15:28.464367] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10a87c635900 00:16:16.963 [2024-07-14 21:15:28.464373] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:16.963 [2024-07-14 21:15:28.464405] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10a87c697e20 00:16:16.963 [2024-07-14 21:15:28.464496] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10a87c635900 00:16:16.963 [2024-07-14 21:15:28.464501] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10a87c635900 00:16:16.963 [2024-07-14 21:15:28.464542] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@430 -- # 
verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:16.963 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:17.221 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:17.221 "name": "raid_bdev1", 00:16:17.221 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:17.221 "strip_size_kb": 0, 00:16:17.221 "state": "online", 00:16:17.221 "raid_level": "raid1", 00:16:17.221 "superblock": true, 00:16:17.221 "num_base_bdevs": 4, 00:16:17.221 "num_base_bdevs_discovered": 4, 00:16:17.221 "num_base_bdevs_operational": 4, 00:16:17.221 "base_bdevs_list": [ 00:16:17.221 { 00:16:17.221 "name": "pt1", 00:16:17.221 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.221 "is_configured": true, 00:16:17.221 "data_offset": 2048, 00:16:17.221 "data_size": 63488 00:16:17.221 }, 00:16:17.221 { 00:16:17.221 "name": "pt2", 00:16:17.221 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.221 "is_configured": true, 00:16:17.221 "data_offset": 2048, 00:16:17.221 "data_size": 63488 00:16:17.221 }, 00:16:17.221 { 00:16:17.221 "name": "pt3", 00:16:17.221 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.222 "is_configured": true, 00:16:17.222 "data_offset": 2048, 00:16:17.222 "data_size": 63488 00:16:17.222 }, 00:16:17.222 { 00:16:17.222 "name": "pt4", 00:16:17.222 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.222 "is_configured": true, 00:16:17.222 "data_offset": 2048, 00:16:17.222 "data_size": 63488 00:16:17.222 } 00:16:17.222 ] 00:16:17.222 }' 00:16:17.222 21:15:28 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:17.222 21:15:28 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:17.480 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:16:17.480 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:17.480 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:17.480 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:17.480 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:17.480 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 
-- # local name 00:16:17.480 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:17.480 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:17.738 [2024-07-14 21:15:29.219708] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:17.738 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:17.738 "name": "raid_bdev1", 00:16:17.738 "aliases": [ 00:16:17.738 "31de96cc-4226-11ef-aa83-81fbc7dfef58" 00:16:17.738 ], 00:16:17.738 "product_name": "Raid Volume", 00:16:17.738 "block_size": 512, 00:16:17.738 "num_blocks": 63488, 00:16:17.738 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:17.738 "assigned_rate_limits": { 00:16:17.738 "rw_ios_per_sec": 0, 00:16:17.738 "rw_mbytes_per_sec": 0, 00:16:17.738 "r_mbytes_per_sec": 0, 00:16:17.738 "w_mbytes_per_sec": 0 00:16:17.738 }, 00:16:17.738 "claimed": false, 00:16:17.738 "zoned": false, 00:16:17.738 "supported_io_types": { 00:16:17.738 "read": true, 00:16:17.738 "write": true, 00:16:17.738 "unmap": false, 00:16:17.738 "flush": false, 00:16:17.738 "reset": true, 00:16:17.738 "nvme_admin": false, 00:16:17.738 "nvme_io": false, 00:16:17.738 "nvme_io_md": false, 00:16:17.738 "write_zeroes": true, 00:16:17.738 "zcopy": false, 00:16:17.738 "get_zone_info": false, 00:16:17.738 "zone_management": false, 00:16:17.738 "zone_append": false, 00:16:17.738 "compare": false, 00:16:17.738 "compare_and_write": false, 00:16:17.738 "abort": false, 00:16:17.738 "seek_hole": false, 00:16:17.738 "seek_data": false, 00:16:17.738 "copy": false, 00:16:17.739 "nvme_iov_md": false 00:16:17.739 }, 00:16:17.739 "memory_domains": [ 00:16:17.739 { 00:16:17.739 "dma_device_id": "system", 00:16:17.739 "dma_device_type": 1 00:16:17.739 }, 00:16:17.739 { 00:16:17.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.739 "dma_device_type": 2 00:16:17.739 }, 00:16:17.739 { 00:16:17.739 "dma_device_id": "system", 00:16:17.739 "dma_device_type": 1 00:16:17.739 }, 00:16:17.739 { 00:16:17.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.739 "dma_device_type": 2 00:16:17.739 }, 00:16:17.739 { 00:16:17.739 "dma_device_id": "system", 00:16:17.739 "dma_device_type": 1 00:16:17.739 }, 00:16:17.739 { 00:16:17.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.739 "dma_device_type": 2 00:16:17.739 }, 00:16:17.739 { 00:16:17.739 "dma_device_id": "system", 00:16:17.739 "dma_device_type": 1 00:16:17.739 }, 00:16:17.739 { 00:16:17.739 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.739 "dma_device_type": 2 00:16:17.739 } 00:16:17.739 ], 00:16:17.739 "driver_specific": { 00:16:17.739 "raid": { 00:16:17.739 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:17.739 "strip_size_kb": 0, 00:16:17.739 "state": "online", 00:16:17.739 "raid_level": "raid1", 00:16:17.739 "superblock": true, 00:16:17.739 "num_base_bdevs": 4, 00:16:17.739 "num_base_bdevs_discovered": 4, 00:16:17.739 "num_base_bdevs_operational": 4, 00:16:17.739 "base_bdevs_list": [ 00:16:17.739 { 00:16:17.739 "name": "pt1", 00:16:17.739 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.739 "is_configured": true, 00:16:17.739 "data_offset": 2048, 00:16:17.739 "data_size": 63488 00:16:17.739 }, 00:16:17.739 { 00:16:17.739 "name": "pt2", 00:16:17.739 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:17.739 "is_configured": true, 00:16:17.739 "data_offset": 2048, 00:16:17.739 "data_size": 63488 
00:16:17.739 }, 00:16:17.739 { 00:16:17.739 "name": "pt3", 00:16:17.739 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:17.739 "is_configured": true, 00:16:17.739 "data_offset": 2048, 00:16:17.739 "data_size": 63488 00:16:17.739 }, 00:16:17.739 { 00:16:17.739 "name": "pt4", 00:16:17.739 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:17.739 "is_configured": true, 00:16:17.739 "data_offset": 2048, 00:16:17.739 "data_size": 63488 00:16:17.739 } 00:16:17.739 ] 00:16:17.739 } 00:16:17.739 } 00:16:17.739 }' 00:16:17.739 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:17.739 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:17.739 pt2 00:16:17.739 pt3 00:16:17.739 pt4' 00:16:17.739 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:17.739 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:17.739 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:17.996 "name": "pt1", 00:16:17.996 "aliases": [ 00:16:17.996 "00000000-0000-0000-0000-000000000001" 00:16:17.996 ], 00:16:17.996 "product_name": "passthru", 00:16:17.996 "block_size": 512, 00:16:17.996 "num_blocks": 65536, 00:16:17.996 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:17.996 "assigned_rate_limits": { 00:16:17.996 "rw_ios_per_sec": 0, 00:16:17.996 "rw_mbytes_per_sec": 0, 00:16:17.996 "r_mbytes_per_sec": 0, 00:16:17.996 "w_mbytes_per_sec": 0 00:16:17.996 }, 00:16:17.996 "claimed": true, 00:16:17.996 "claim_type": "exclusive_write", 00:16:17.996 "zoned": false, 00:16:17.996 "supported_io_types": { 00:16:17.996 "read": true, 00:16:17.996 "write": true, 00:16:17.996 "unmap": true, 00:16:17.996 "flush": true, 00:16:17.996 "reset": true, 00:16:17.996 "nvme_admin": false, 00:16:17.996 "nvme_io": false, 00:16:17.996 "nvme_io_md": false, 00:16:17.996 "write_zeroes": true, 00:16:17.996 "zcopy": true, 00:16:17.996 "get_zone_info": false, 00:16:17.996 "zone_management": false, 00:16:17.996 "zone_append": false, 00:16:17.996 "compare": false, 00:16:17.996 "compare_and_write": false, 00:16:17.996 "abort": true, 00:16:17.996 "seek_hole": false, 00:16:17.996 "seek_data": false, 00:16:17.996 "copy": true, 00:16:17.996 "nvme_iov_md": false 00:16:17.996 }, 00:16:17.996 "memory_domains": [ 00:16:17.996 { 00:16:17.996 "dma_device_id": "system", 00:16:17.996 "dma_device_type": 1 00:16:17.996 }, 00:16:17.996 { 00:16:17.996 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:17.996 "dma_device_type": 2 00:16:17.996 } 00:16:17.996 ], 00:16:17.996 "driver_specific": { 00:16:17.996 "passthru": { 00:16:17.996 "name": "pt1", 00:16:17.996 "base_bdev_name": "malloc1" 00:16:17.996 } 00:16:17.996 } 00:16:17.996 }' 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 
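(The repeated jq / [[ ... ]] pairs above and below are verify_raid_bdev_properties checking each base bdev's geometry. Condensed, the per-field pattern is roughly the following sketch, where $base_bdev_info holds the JSON object captured from bdev_get_bdevs and the expected values match the dumps in this trace:)

    # a plain 512-byte-block malloc/passthru bdev carries no metadata or DIF,
    # so everything except block_size should come back null
    [[ $(jq .block_size <<< "$base_bdev_info") == 512 ]]
    [[ $(jq .md_size <<< "$base_bdev_info") == null ]]
    [[ $(jq .md_interleave <<< "$base_bdev_info") == null ]]
    [[ $(jq .dif_type <<< "$base_bdev_info") == null ]]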
00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:17.996 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:18.254 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:18.254 "name": "pt2", 00:16:18.254 "aliases": [ 00:16:18.254 "00000000-0000-0000-0000-000000000002" 00:16:18.254 ], 00:16:18.254 "product_name": "passthru", 00:16:18.254 "block_size": 512, 00:16:18.254 "num_blocks": 65536, 00:16:18.254 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:18.254 "assigned_rate_limits": { 00:16:18.254 "rw_ios_per_sec": 0, 00:16:18.254 "rw_mbytes_per_sec": 0, 00:16:18.254 "r_mbytes_per_sec": 0, 00:16:18.254 "w_mbytes_per_sec": 0 00:16:18.254 }, 00:16:18.254 "claimed": true, 00:16:18.254 "claim_type": "exclusive_write", 00:16:18.254 "zoned": false, 00:16:18.254 "supported_io_types": { 00:16:18.254 "read": true, 00:16:18.254 "write": true, 00:16:18.254 "unmap": true, 00:16:18.254 "flush": true, 00:16:18.254 "reset": true, 00:16:18.254 "nvme_admin": false, 00:16:18.254 "nvme_io": false, 00:16:18.254 "nvme_io_md": false, 00:16:18.254 "write_zeroes": true, 00:16:18.254 "zcopy": true, 00:16:18.254 "get_zone_info": false, 00:16:18.254 "zone_management": false, 00:16:18.254 "zone_append": false, 00:16:18.254 "compare": false, 00:16:18.254 "compare_and_write": false, 00:16:18.254 "abort": true, 00:16:18.254 "seek_hole": false, 00:16:18.254 "seek_data": false, 00:16:18.254 "copy": true, 00:16:18.254 "nvme_iov_md": false 00:16:18.254 }, 00:16:18.254 "memory_domains": [ 00:16:18.254 { 00:16:18.254 "dma_device_id": "system", 00:16:18.254 "dma_device_type": 1 00:16:18.254 }, 00:16:18.254 { 00:16:18.254 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.254 "dma_device_type": 2 00:16:18.254 } 00:16:18.254 ], 00:16:18.254 "driver_specific": { 00:16:18.254 "passthru": { 00:16:18.254 "name": "pt2", 00:16:18.254 "base_bdev_name": "malloc2" 00:16:18.254 } 00:16:18.254 } 00:16:18.254 }' 00:16:18.254 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.254 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.254 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:18.254 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.254 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.254 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:18.254 21:15:29 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.254 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.511 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.511 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.511 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.511 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.511 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:18.511 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:18.511 21:15:29 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:18.770 "name": "pt3", 00:16:18.770 "aliases": [ 00:16:18.770 "00000000-0000-0000-0000-000000000003" 00:16:18.770 ], 00:16:18.770 "product_name": "passthru", 00:16:18.770 "block_size": 512, 00:16:18.770 "num_blocks": 65536, 00:16:18.770 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:18.770 "assigned_rate_limits": { 00:16:18.770 "rw_ios_per_sec": 0, 00:16:18.770 "rw_mbytes_per_sec": 0, 00:16:18.770 "r_mbytes_per_sec": 0, 00:16:18.770 "w_mbytes_per_sec": 0 00:16:18.770 }, 00:16:18.770 "claimed": true, 00:16:18.770 "claim_type": "exclusive_write", 00:16:18.770 "zoned": false, 00:16:18.770 "supported_io_types": { 00:16:18.770 "read": true, 00:16:18.770 "write": true, 00:16:18.770 "unmap": true, 00:16:18.770 "flush": true, 00:16:18.770 "reset": true, 00:16:18.770 "nvme_admin": false, 00:16:18.770 "nvme_io": false, 00:16:18.770 "nvme_io_md": false, 00:16:18.770 "write_zeroes": true, 00:16:18.770 "zcopy": true, 00:16:18.770 "get_zone_info": false, 00:16:18.770 "zone_management": false, 00:16:18.770 "zone_append": false, 00:16:18.770 "compare": false, 00:16:18.770 "compare_and_write": false, 00:16:18.770 "abort": true, 00:16:18.770 "seek_hole": false, 00:16:18.770 "seek_data": false, 00:16:18.770 "copy": true, 00:16:18.770 "nvme_iov_md": false 00:16:18.770 }, 00:16:18.770 "memory_domains": [ 00:16:18.770 { 00:16:18.770 "dma_device_id": "system", 00:16:18.770 "dma_device_type": 1 00:16:18.770 }, 00:16:18.770 { 00:16:18.770 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:18.770 "dma_device_type": 2 00:16:18.770 } 00:16:18.770 ], 00:16:18.770 "driver_specific": { 00:16:18.770 "passthru": { 00:16:18.770 "name": "pt3", 00:16:18.770 "base_bdev_name": "malloc3" 00:16:18.770 } 00:16:18.770 } 00:16:18.770 }' 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
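(The per-bdev verification blocks in this stretch are driven by a name list pulled from the RAID volume's own dump. A sketch of that derivation, reusing the jq filter visible earlier in the trace and assuming the same RPC socket; $rpc is shorthand introduced here for readability:)

    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # names of all configured base bdevs, straight out of the raid bdev's JSON
    base_bdev_names=$($rpc bdev_get_bdevs -b raid_bdev1 | jq '.[]' \
        | jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name')
    for name in $base_bdev_names; do
      # dump each base bdev and apply the block_size/md_size/md_interleave/dif_type checks
      base_bdev_info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
    done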
00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:18.770 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:19.028 "name": "pt4", 00:16:19.028 "aliases": [ 00:16:19.028 "00000000-0000-0000-0000-000000000004" 00:16:19.028 ], 00:16:19.028 "product_name": "passthru", 00:16:19.028 "block_size": 512, 00:16:19.028 "num_blocks": 65536, 00:16:19.028 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:19.028 "assigned_rate_limits": { 00:16:19.028 "rw_ios_per_sec": 0, 00:16:19.028 "rw_mbytes_per_sec": 0, 00:16:19.028 "r_mbytes_per_sec": 0, 00:16:19.028 "w_mbytes_per_sec": 0 00:16:19.028 }, 00:16:19.028 "claimed": true, 00:16:19.028 "claim_type": "exclusive_write", 00:16:19.028 "zoned": false, 00:16:19.028 "supported_io_types": { 00:16:19.028 "read": true, 00:16:19.028 "write": true, 00:16:19.028 "unmap": true, 00:16:19.028 "flush": true, 00:16:19.028 "reset": true, 00:16:19.028 "nvme_admin": false, 00:16:19.028 "nvme_io": false, 00:16:19.028 "nvme_io_md": false, 00:16:19.028 "write_zeroes": true, 00:16:19.028 "zcopy": true, 00:16:19.028 "get_zone_info": false, 00:16:19.028 "zone_management": false, 00:16:19.028 "zone_append": false, 00:16:19.028 "compare": false, 00:16:19.028 "compare_and_write": false, 00:16:19.028 "abort": true, 00:16:19.028 "seek_hole": false, 00:16:19.028 "seek_data": false, 00:16:19.028 "copy": true, 00:16:19.028 "nvme_iov_md": false 00:16:19.028 }, 00:16:19.028 "memory_domains": [ 00:16:19.028 { 00:16:19.028 "dma_device_id": "system", 00:16:19.028 "dma_device_type": 1 00:16:19.028 }, 00:16:19.028 { 00:16:19.028 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:19.028 "dma_device_type": 2 00:16:19.028 } 00:16:19.028 ], 00:16:19.028 "driver_specific": { 00:16:19.028 "passthru": { 00:16:19.028 "name": "pt4", 00:16:19.028 "base_bdev_name": "malloc4" 00:16:19.028 } 00:16:19.028 } 00:16:19.028 }' 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test 
-- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:19.028 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:16:19.298 [2024-07-14 21:15:30.691782] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:19.298 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=31de96cc-4226-11ef-aa83-81fbc7dfef58 00:16:19.298 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@435 -- # '[' -z 31de96cc-4226-11ef-aa83-81fbc7dfef58 ']' 00:16:19.298 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:19.601 [2024-07-14 21:15:30.959745] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:19.601 [2024-07-14 21:15:30.959768] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:19.601 [2024-07-14 21:15:30.959807] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:19.601 [2024-07-14 21:15:30.959826] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:19.601 [2024-07-14 21:15:30.959829] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10a87c635900 name raid_bdev1, state offline 00:16:19.601 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:16:19.601 21:15:30 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:19.859 21:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:16:19.859 21:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:16:19.859 21:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:19.859 21:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:20.117 21:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:20.117 21:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:20.376 21:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:20.376 21:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:20.376 21:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:16:20.376 21:15:31 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:20.634 21:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs 00:16:20.634 21:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:16:20.891 21:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:16:20.891 21:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:20.891 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@648 -- # local es=0 00:16:20.891 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:20.891 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.891 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.891 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.891 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.891 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.891 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:16:20.891 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:16:20.891 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:16:20.891 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2 malloc3 malloc4' -n raid_bdev1 00:16:21.148 [2024-07-14 21:15:32.623797] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:16:21.148 [2024-07-14 21:15:32.624514] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:16:21.148 [2024-07-14 21:15:32.624533] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc3 is claimed 00:16:21.148 [2024-07-14 21:15:32.624557] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc4 is claimed 00:16:21.148 [2024-07-14 21:15:32.624570] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:16:21.148 [2024-07-14 21:15:32.624616] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:16:21.148 [2024-07-14 21:15:32.624627] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc3 00:16:21.149 [2024-07-14 21:15:32.624635] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc4 00:16:21.149 [2024-07-14 21:15:32.624643] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:21.149 [2024-07-14 21:15:32.624647] bdev_raid.c: 367:raid_bdev_cleanup: 
*DEBUG*: raid_bdev_cleanup, 0x10a87c635680 name raid_bdev1, state configuring 00:16:21.149 request: 00:16:21.149 { 00:16:21.149 "name": "raid_bdev1", 00:16:21.149 "raid_level": "raid1", 00:16:21.149 "base_bdevs": [ 00:16:21.149 "malloc1", 00:16:21.149 "malloc2", 00:16:21.149 "malloc3", 00:16:21.149 "malloc4" 00:16:21.149 ], 00:16:21.149 "superblock": false, 00:16:21.149 "method": "bdev_raid_create", 00:16:21.149 "req_id": 1 00:16:21.149 } 00:16:21.149 Got JSON-RPC error response 00:16:21.149 response: 00:16:21.149 { 00:16:21.149 "code": -17, 00:16:21.149 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:16:21.149 } 00:16:21.149 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@651 -- # es=1 00:16:21.149 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:16:21.149 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:16:21.149 21:15:32 bdev_raid.raid_superblock_test -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:16:21.149 21:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.149 21:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:16:21.407 21:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:16:21.407 21:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:16:21.407 21:15:32 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:21.666 [2024-07-14 21:15:33.091819] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:21.666 [2024-07-14 21:15:33.091898] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:21.666 [2024-07-14 21:15:33.091910] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c635180 00:16:21.666 [2024-07-14 21:15:33.091917] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:21.666 [2024-07-14 21:15:33.092672] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:21.666 [2024-07-14 21:15:33.092698] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:21.666 [2024-07-14 21:15:33.092737] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:21.666 [2024-07-14 21:15:33.092750] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:21.666 pt1 00:16:21.666 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:21.666 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:21.666 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:21.666 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:21.666 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:21.666 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:21.666 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:21.666 
21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:21.666 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:21.666 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:21.666 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:21.666 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:21.923 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:21.923 "name": "raid_bdev1", 00:16:21.923 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:21.923 "strip_size_kb": 0, 00:16:21.923 "state": "configuring", 00:16:21.923 "raid_level": "raid1", 00:16:21.923 "superblock": true, 00:16:21.923 "num_base_bdevs": 4, 00:16:21.923 "num_base_bdevs_discovered": 1, 00:16:21.923 "num_base_bdevs_operational": 4, 00:16:21.923 "base_bdevs_list": [ 00:16:21.923 { 00:16:21.923 "name": "pt1", 00:16:21.923 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:21.923 "is_configured": true, 00:16:21.923 "data_offset": 2048, 00:16:21.923 "data_size": 63488 00:16:21.923 }, 00:16:21.923 { 00:16:21.923 "name": null, 00:16:21.923 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:21.923 "is_configured": false, 00:16:21.923 "data_offset": 2048, 00:16:21.923 "data_size": 63488 00:16:21.923 }, 00:16:21.923 { 00:16:21.923 "name": null, 00:16:21.923 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:21.923 "is_configured": false, 00:16:21.923 "data_offset": 2048, 00:16:21.923 "data_size": 63488 00:16:21.923 }, 00:16:21.923 { 00:16:21.923 "name": null, 00:16:21.923 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:21.923 "is_configured": false, 00:16:21.923 "data_offset": 2048, 00:16:21.923 "data_size": 63488 00:16:21.923 } 00:16:21.923 ] 00:16:21.923 }' 00:16:21.923 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:21.923 21:15:33 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:22.180 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@469 -- # '[' 4 -gt 2 ']' 00:16:22.180 21:15:33 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@471 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:22.436 [2024-07-14 21:15:33.899840] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:22.436 [2024-07-14 21:15:33.899894] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:22.436 [2024-07-14 21:15:33.899922] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c634780 00:16:22.436 [2024-07-14 21:15:33.899929] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:22.436 [2024-07-14 21:15:33.900125] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:22.436 [2024-07-14 21:15:33.900145] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:22.436 [2024-07-14 21:15:33.900169] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:22.436 [2024-07-14 21:15:33.900177] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:22.436 pt2 00:16:22.437 21:15:33 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@472 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:22.694 [2024-07-14 21:15:34.099836] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt2 00:16:22.694 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@473 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 4 00:16:22.694 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:22.694 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:22.694 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:22.694 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:22.694 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:22.694 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:22.694 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:22.694 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:22.694 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:22.694 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:22.694 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:22.952 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:22.952 "name": "raid_bdev1", 00:16:22.952 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:22.952 "strip_size_kb": 0, 00:16:22.952 "state": "configuring", 00:16:22.952 "raid_level": "raid1", 00:16:22.952 "superblock": true, 00:16:22.952 "num_base_bdevs": 4, 00:16:22.952 "num_base_bdevs_discovered": 1, 00:16:22.952 "num_base_bdevs_operational": 4, 00:16:22.952 "base_bdevs_list": [ 00:16:22.952 { 00:16:22.952 "name": "pt1", 00:16:22.952 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:22.952 "is_configured": true, 00:16:22.952 "data_offset": 2048, 00:16:22.952 "data_size": 63488 00:16:22.952 }, 00:16:22.952 { 00:16:22.952 "name": null, 00:16:22.952 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:22.952 "is_configured": false, 00:16:22.952 "data_offset": 2048, 00:16:22.952 "data_size": 63488 00:16:22.952 }, 00:16:22.952 { 00:16:22.952 "name": null, 00:16:22.952 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:22.952 "is_configured": false, 00:16:22.952 "data_offset": 2048, 00:16:22.952 "data_size": 63488 00:16:22.952 }, 00:16:22.952 { 00:16:22.952 "name": null, 00:16:22.952 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:22.952 "is_configured": false, 00:16:22.952 "data_offset": 2048, 00:16:22.952 "data_size": 63488 00:16:22.952 } 00:16:22.952 ] 00:16:22.952 }' 00:16:22.952 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:22.952 21:15:34 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:23.210 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:16:23.210 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:23.210 21:15:34 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:23.468 [2024-07-14 21:15:34.847901] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:23.468 [2024-07-14 21:15:34.847965] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.468 [2024-07-14 21:15:34.847976] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c634780 00:16:23.468 [2024-07-14 21:15:34.847983] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.468 [2024-07-14 21:15:34.848136] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.468 [2024-07-14 21:15:34.848148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:23.468 [2024-07-14 21:15:34.848188] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:23.468 [2024-07-14 21:15:34.848197] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:23.468 pt2 00:16:23.468 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:23.468 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:23.468 21:15:34 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:23.726 [2024-07-14 21:15:35.115894] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:23.726 [2024-07-14 21:15:35.115923] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.726 [2024-07-14 21:15:35.115948] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c635b80 00:16:23.726 [2024-07-14 21:15:35.115971] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.726 [2024-07-14 21:15:35.116094] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.726 [2024-07-14 21:15:35.116106] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt3 00:16:23.726 [2024-07-14 21:15:35.116128] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:23.726 [2024-07-14 21:15:35.116136] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:23.726 pt3 00:16:23.726 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:23.726 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:23.726 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:23.985 [2024-07-14 21:15:35.383910] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:23.985 [2024-07-14 21:15:35.383949] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:23.985 [2024-07-14 21:15:35.383976] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c635900 00:16:23.985 [2024-07-14 21:15:35.383982] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:23.985 [2024-07-14 21:15:35.384112] 
vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:23.985 [2024-07-14 21:15:35.384123] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:23.985 [2024-07-14 21:15:35.384144] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:23.985 [2024-07-14 21:15:35.384168] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:23.985 [2024-07-14 21:15:35.384245] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10a87c634c80 00:16:23.985 [2024-07-14 21:15:35.384250] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:23.985 [2024-07-14 21:15:35.384271] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10a87c697e20 00:16:23.985 [2024-07-14 21:15:35.384328] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10a87c634c80 00:16:23.985 [2024-07-14 21:15:35.384333] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10a87c634c80 00:16:23.985 [2024-07-14 21:15:35.384356] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:23.985 pt4 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:23.985 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:24.243 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:24.243 "name": "raid_bdev1", 00:16:24.243 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:24.243 "strip_size_kb": 0, 00:16:24.243 "state": "online", 00:16:24.243 "raid_level": "raid1", 00:16:24.243 "superblock": true, 00:16:24.243 "num_base_bdevs": 4, 00:16:24.243 "num_base_bdevs_discovered": 4, 00:16:24.243 "num_base_bdevs_operational": 4, 00:16:24.243 "base_bdevs_list": [ 00:16:24.243 { 00:16:24.243 "name": "pt1", 00:16:24.243 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:24.243 "is_configured": true, 00:16:24.243 "data_offset": 2048, 00:16:24.243 "data_size": 63488 00:16:24.243 
}, 00:16:24.243 { 00:16:24.243 "name": "pt2", 00:16:24.243 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.243 "is_configured": true, 00:16:24.243 "data_offset": 2048, 00:16:24.243 "data_size": 63488 00:16:24.243 }, 00:16:24.243 { 00:16:24.243 "name": "pt3", 00:16:24.243 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.243 "is_configured": true, 00:16:24.243 "data_offset": 2048, 00:16:24.243 "data_size": 63488 00:16:24.243 }, 00:16:24.243 { 00:16:24.243 "name": "pt4", 00:16:24.243 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.243 "is_configured": true, 00:16:24.243 "data_offset": 2048, 00:16:24.243 "data_size": 63488 00:16:24.243 } 00:16:24.243 ] 00:16:24.243 }' 00:16:24.243 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:24.243 21:15:35 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:24.502 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:16:24.502 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:24.502 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:24.502 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:24.502 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:24.502 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@198 -- # local name 00:16:24.502 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:24.502 21:15:35 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:24.760 [2024-07-14 21:15:36.075987] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:24.761 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:24.761 "name": "raid_bdev1", 00:16:24.761 "aliases": [ 00:16:24.761 "31de96cc-4226-11ef-aa83-81fbc7dfef58" 00:16:24.761 ], 00:16:24.761 "product_name": "Raid Volume", 00:16:24.761 "block_size": 512, 00:16:24.761 "num_blocks": 63488, 00:16:24.761 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:24.761 "assigned_rate_limits": { 00:16:24.761 "rw_ios_per_sec": 0, 00:16:24.761 "rw_mbytes_per_sec": 0, 00:16:24.761 "r_mbytes_per_sec": 0, 00:16:24.761 "w_mbytes_per_sec": 0 00:16:24.761 }, 00:16:24.761 "claimed": false, 00:16:24.761 "zoned": false, 00:16:24.761 "supported_io_types": { 00:16:24.761 "read": true, 00:16:24.761 "write": true, 00:16:24.761 "unmap": false, 00:16:24.761 "flush": false, 00:16:24.761 "reset": true, 00:16:24.761 "nvme_admin": false, 00:16:24.761 "nvme_io": false, 00:16:24.761 "nvme_io_md": false, 00:16:24.761 "write_zeroes": true, 00:16:24.761 "zcopy": false, 00:16:24.761 "get_zone_info": false, 00:16:24.761 "zone_management": false, 00:16:24.761 "zone_append": false, 00:16:24.761 "compare": false, 00:16:24.761 "compare_and_write": false, 00:16:24.761 "abort": false, 00:16:24.761 "seek_hole": false, 00:16:24.761 "seek_data": false, 00:16:24.761 "copy": false, 00:16:24.761 "nvme_iov_md": false 00:16:24.761 }, 00:16:24.761 "memory_domains": [ 00:16:24.761 { 00:16:24.761 "dma_device_id": "system", 00:16:24.761 "dma_device_type": 1 00:16:24.761 }, 00:16:24.761 { 00:16:24.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.761 "dma_device_type": 2 00:16:24.761 }, 
00:16:24.761 { 00:16:24.761 "dma_device_id": "system", 00:16:24.761 "dma_device_type": 1 00:16:24.761 }, 00:16:24.761 { 00:16:24.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.761 "dma_device_type": 2 00:16:24.761 }, 00:16:24.761 { 00:16:24.761 "dma_device_id": "system", 00:16:24.761 "dma_device_type": 1 00:16:24.761 }, 00:16:24.761 { 00:16:24.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.761 "dma_device_type": 2 00:16:24.761 }, 00:16:24.761 { 00:16:24.761 "dma_device_id": "system", 00:16:24.761 "dma_device_type": 1 00:16:24.761 }, 00:16:24.761 { 00:16:24.761 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:24.761 "dma_device_type": 2 00:16:24.761 } 00:16:24.761 ], 00:16:24.761 "driver_specific": { 00:16:24.761 "raid": { 00:16:24.761 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:24.761 "strip_size_kb": 0, 00:16:24.761 "state": "online", 00:16:24.761 "raid_level": "raid1", 00:16:24.761 "superblock": true, 00:16:24.761 "num_base_bdevs": 4, 00:16:24.761 "num_base_bdevs_discovered": 4, 00:16:24.761 "num_base_bdevs_operational": 4, 00:16:24.761 "base_bdevs_list": [ 00:16:24.761 { 00:16:24.761 "name": "pt1", 00:16:24.761 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:24.761 "is_configured": true, 00:16:24.761 "data_offset": 2048, 00:16:24.761 "data_size": 63488 00:16:24.761 }, 00:16:24.761 { 00:16:24.761 "name": "pt2", 00:16:24.761 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:24.761 "is_configured": true, 00:16:24.761 "data_offset": 2048, 00:16:24.761 "data_size": 63488 00:16:24.761 }, 00:16:24.761 { 00:16:24.761 "name": "pt3", 00:16:24.761 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:24.761 "is_configured": true, 00:16:24.761 "data_offset": 2048, 00:16:24.761 "data_size": 63488 00:16:24.761 }, 00:16:24.761 { 00:16:24.761 "name": "pt4", 00:16:24.761 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:24.761 "is_configured": true, 00:16:24.761 "data_offset": 2048, 00:16:24.761 "data_size": 63488 00:16:24.761 } 00:16:24.761 ] 00:16:24.761 } 00:16:24.761 } 00:16:24.761 }' 00:16:24.761 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:24.761 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:24.761 pt2 00:16:24.761 pt3 00:16:24.761 pt4' 00:16:24.761 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:24.761 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:24.761 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:25.019 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:25.019 "name": "pt1", 00:16:25.019 "aliases": [ 00:16:25.019 "00000000-0000-0000-0000-000000000001" 00:16:25.019 ], 00:16:25.019 "product_name": "passthru", 00:16:25.019 "block_size": 512, 00:16:25.019 "num_blocks": 65536, 00:16:25.019 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:25.019 "assigned_rate_limits": { 00:16:25.019 "rw_ios_per_sec": 0, 00:16:25.019 "rw_mbytes_per_sec": 0, 00:16:25.019 "r_mbytes_per_sec": 0, 00:16:25.019 "w_mbytes_per_sec": 0 00:16:25.019 }, 00:16:25.019 "claimed": true, 00:16:25.019 "claim_type": "exclusive_write", 00:16:25.019 "zoned": false, 00:16:25.019 "supported_io_types": { 00:16:25.019 "read": true, 00:16:25.019 "write": true, 00:16:25.019 
"unmap": true, 00:16:25.019 "flush": true, 00:16:25.019 "reset": true, 00:16:25.019 "nvme_admin": false, 00:16:25.019 "nvme_io": false, 00:16:25.019 "nvme_io_md": false, 00:16:25.019 "write_zeroes": true, 00:16:25.019 "zcopy": true, 00:16:25.019 "get_zone_info": false, 00:16:25.019 "zone_management": false, 00:16:25.019 "zone_append": false, 00:16:25.019 "compare": false, 00:16:25.019 "compare_and_write": false, 00:16:25.019 "abort": true, 00:16:25.019 "seek_hole": false, 00:16:25.019 "seek_data": false, 00:16:25.019 "copy": true, 00:16:25.019 "nvme_iov_md": false 00:16:25.019 }, 00:16:25.019 "memory_domains": [ 00:16:25.019 { 00:16:25.019 "dma_device_id": "system", 00:16:25.019 "dma_device_type": 1 00:16:25.019 }, 00:16:25.019 { 00:16:25.019 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.019 "dma_device_type": 2 00:16:25.019 } 00:16:25.020 ], 00:16:25.020 "driver_specific": { 00:16:25.020 "passthru": { 00:16:25.020 "name": "pt1", 00:16:25.020 "base_bdev_name": "malloc1" 00:16:25.020 } 00:16:25.020 } 00:16:25.020 }' 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:25.020 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:25.278 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:25.278 "name": "pt2", 00:16:25.278 "aliases": [ 00:16:25.278 "00000000-0000-0000-0000-000000000002" 00:16:25.278 ], 00:16:25.278 "product_name": "passthru", 00:16:25.278 "block_size": 512, 00:16:25.278 "num_blocks": 65536, 00:16:25.278 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:25.278 "assigned_rate_limits": { 00:16:25.278 "rw_ios_per_sec": 0, 00:16:25.278 "rw_mbytes_per_sec": 0, 00:16:25.278 "r_mbytes_per_sec": 0, 00:16:25.278 "w_mbytes_per_sec": 0 00:16:25.278 }, 00:16:25.278 "claimed": true, 00:16:25.278 "claim_type": "exclusive_write", 00:16:25.278 "zoned": false, 00:16:25.278 "supported_io_types": { 00:16:25.278 "read": true, 00:16:25.278 "write": true, 00:16:25.278 "unmap": true, 00:16:25.278 "flush": true, 00:16:25.278 "reset": true, 00:16:25.278 "nvme_admin": false, 00:16:25.278 "nvme_io": false, 00:16:25.278 
"nvme_io_md": false, 00:16:25.278 "write_zeroes": true, 00:16:25.279 "zcopy": true, 00:16:25.279 "get_zone_info": false, 00:16:25.279 "zone_management": false, 00:16:25.279 "zone_append": false, 00:16:25.279 "compare": false, 00:16:25.279 "compare_and_write": false, 00:16:25.279 "abort": true, 00:16:25.279 "seek_hole": false, 00:16:25.279 "seek_data": false, 00:16:25.279 "copy": true, 00:16:25.279 "nvme_iov_md": false 00:16:25.279 }, 00:16:25.279 "memory_domains": [ 00:16:25.279 { 00:16:25.279 "dma_device_id": "system", 00:16:25.279 "dma_device_type": 1 00:16:25.279 }, 00:16:25.279 { 00:16:25.279 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.279 "dma_device_type": 2 00:16:25.279 } 00:16:25.279 ], 00:16:25.279 "driver_specific": { 00:16:25.279 "passthru": { 00:16:25.279 "name": "pt2", 00:16:25.279 "base_bdev_name": "malloc2" 00:16:25.279 } 00:16:25.279 } 00:16:25.279 }' 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt3 00:16:25.279 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:25.537 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:25.537 "name": "pt3", 00:16:25.537 "aliases": [ 00:16:25.537 "00000000-0000-0000-0000-000000000003" 00:16:25.537 ], 00:16:25.537 "product_name": "passthru", 00:16:25.537 "block_size": 512, 00:16:25.537 "num_blocks": 65536, 00:16:25.537 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:25.537 "assigned_rate_limits": { 00:16:25.537 "rw_ios_per_sec": 0, 00:16:25.537 "rw_mbytes_per_sec": 0, 00:16:25.537 "r_mbytes_per_sec": 0, 00:16:25.537 "w_mbytes_per_sec": 0 00:16:25.537 }, 00:16:25.537 "claimed": true, 00:16:25.537 "claim_type": "exclusive_write", 00:16:25.537 "zoned": false, 00:16:25.537 "supported_io_types": { 00:16:25.537 "read": true, 00:16:25.537 "write": true, 00:16:25.537 "unmap": true, 00:16:25.537 "flush": true, 00:16:25.537 "reset": true, 00:16:25.537 "nvme_admin": false, 00:16:25.537 "nvme_io": false, 00:16:25.537 "nvme_io_md": false, 00:16:25.537 "write_zeroes": true, 00:16:25.537 "zcopy": true, 00:16:25.537 "get_zone_info": false, 00:16:25.537 "zone_management": 
false, 00:16:25.537 "zone_append": false, 00:16:25.537 "compare": false, 00:16:25.537 "compare_and_write": false, 00:16:25.537 "abort": true, 00:16:25.537 "seek_hole": false, 00:16:25.537 "seek_data": false, 00:16:25.537 "copy": true, 00:16:25.537 "nvme_iov_md": false 00:16:25.537 }, 00:16:25.537 "memory_domains": [ 00:16:25.537 { 00:16:25.537 "dma_device_id": "system", 00:16:25.537 "dma_device_type": 1 00:16:25.537 }, 00:16:25.537 { 00:16:25.537 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.537 "dma_device_type": 2 00:16:25.537 } 00:16:25.537 ], 00:16:25.537 "driver_specific": { 00:16:25.537 "passthru": { 00:16:25.537 "name": "pt3", 00:16:25.537 "base_bdev_name": "malloc3" 00:16:25.537 } 00:16:25.537 } 00:16:25.537 }' 00:16:25.537 21:15:36 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt4 00:16:25.537 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:25.796 "name": "pt4", 00:16:25.796 "aliases": [ 00:16:25.796 "00000000-0000-0000-0000-000000000004" 00:16:25.796 ], 00:16:25.796 "product_name": "passthru", 00:16:25.796 "block_size": 512, 00:16:25.796 "num_blocks": 65536, 00:16:25.796 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:25.796 "assigned_rate_limits": { 00:16:25.796 "rw_ios_per_sec": 0, 00:16:25.796 "rw_mbytes_per_sec": 0, 00:16:25.796 "r_mbytes_per_sec": 0, 00:16:25.796 "w_mbytes_per_sec": 0 00:16:25.796 }, 00:16:25.796 "claimed": true, 00:16:25.796 "claim_type": "exclusive_write", 00:16:25.796 "zoned": false, 00:16:25.796 "supported_io_types": { 00:16:25.796 "read": true, 00:16:25.796 "write": true, 00:16:25.796 "unmap": true, 00:16:25.796 "flush": true, 00:16:25.796 "reset": true, 00:16:25.796 "nvme_admin": false, 00:16:25.796 "nvme_io": false, 00:16:25.796 "nvme_io_md": false, 00:16:25.796 "write_zeroes": true, 00:16:25.796 "zcopy": true, 00:16:25.796 "get_zone_info": false, 00:16:25.796 "zone_management": false, 00:16:25.796 "zone_append": false, 00:16:25.796 "compare": false, 00:16:25.796 "compare_and_write": false, 00:16:25.796 "abort": true, 00:16:25.796 
"seek_hole": false, 00:16:25.796 "seek_data": false, 00:16:25.796 "copy": true, 00:16:25.796 "nvme_iov_md": false 00:16:25.796 }, 00:16:25.796 "memory_domains": [ 00:16:25.796 { 00:16:25.796 "dma_device_id": "system", 00:16:25.796 "dma_device_type": 1 00:16:25.796 }, 00:16:25.796 { 00:16:25.796 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:25.796 "dma_device_type": 2 00:16:25.796 } 00:16:25.796 ], 00:16:25.796 "driver_specific": { 00:16:25.796 "passthru": { 00:16:25.796 "name": "pt4", 00:16:25.796 "base_bdev_name": "malloc4" 00:16:25.796 } 00:16:25.796 } 00:16:25.796 }' 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@205 -- # [[ 512 == 512 ]] 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:25.796 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:16:26.054 [2024-07-14 21:15:37.568072] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:26.054 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@486 -- # '[' 31de96cc-4226-11ef-aa83-81fbc7dfef58 '!=' 31de96cc-4226-11ef-aa83-81fbc7dfef58 ']' 00:16:26.054 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:16:26.054 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:26.054 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:26.054 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:16:26.312 [2024-07-14 21:15:37.828080] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:16:26.312 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:26.312 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:26.312 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:26.312 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:26.312 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:26.312 21:15:37 bdev_raid.raid_superblock_test -- 
bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:26.312 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:26.312 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:26.312 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:26.312 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:26.312 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:26.312 21:15:37 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:26.570 21:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:26.570 "name": "raid_bdev1", 00:16:26.570 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:26.570 "strip_size_kb": 0, 00:16:26.570 "state": "online", 00:16:26.570 "raid_level": "raid1", 00:16:26.570 "superblock": true, 00:16:26.570 "num_base_bdevs": 4, 00:16:26.570 "num_base_bdevs_discovered": 3, 00:16:26.570 "num_base_bdevs_operational": 3, 00:16:26.570 "base_bdevs_list": [ 00:16:26.570 { 00:16:26.570 "name": null, 00:16:26.570 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:26.570 "is_configured": false, 00:16:26.570 "data_offset": 2048, 00:16:26.570 "data_size": 63488 00:16:26.570 }, 00:16:26.570 { 00:16:26.570 "name": "pt2", 00:16:26.570 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:26.570 "is_configured": true, 00:16:26.570 "data_offset": 2048, 00:16:26.570 "data_size": 63488 00:16:26.570 }, 00:16:26.570 { 00:16:26.570 "name": "pt3", 00:16:26.570 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:26.570 "is_configured": true, 00:16:26.570 "data_offset": 2048, 00:16:26.570 "data_size": 63488 00:16:26.570 }, 00:16:26.570 { 00:16:26.570 "name": "pt4", 00:16:26.570 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:26.570 "is_configured": true, 00:16:26.570 "data_offset": 2048, 00:16:26.570 "data_size": 63488 00:16:26.570 } 00:16:26.570 ] 00:16:26.570 }' 00:16:26.570 21:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:26.570 21:15:38 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:27.134 21:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:27.134 [2024-07-14 21:15:38.600108] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:27.134 [2024-07-14 21:15:38.600126] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:27.134 [2024-07-14 21:15:38.600165] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:27.135 [2024-07-14 21:15:38.600182] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:27.135 [2024-07-14 21:15:38.600186] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10a87c634c80 name raid_bdev1, state offline 00:16:27.135 21:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:27.135 21:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:16:27.391 21:15:38 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:16:27.391 21:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:16:27.391 21:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:16:27.391 21:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:27.391 21:15:38 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:16:27.649 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:27.649 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:27.649 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt3 00:16:27.907 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:27.907 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:27.907 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:28.164 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:16:28.164 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:16:28.164 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:16:28.164 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:28.164 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:28.422 [2024-07-14 21:15:39.788124] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:28.422 [2024-07-14 21:15:39.788182] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:28.422 [2024-07-14 21:15:39.788208] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c635900 00:16:28.422 [2024-07-14 21:15:39.788216] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:28.422 [2024-07-14 21:15:39.789018] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:28.422 [2024-07-14 21:15:39.789057] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:28.422 [2024-07-14 21:15:39.789096] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:16:28.422 [2024-07-14 21:15:39.789108] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:28.422 pt2 00:16:28.422 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:28.422 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:28.422 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:28.422 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:28.422 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:28.422 21:15:39 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:28.422 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:28.422 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:28.422 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:28.422 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:28.422 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:28.422 21:15:39 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:28.680 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:28.680 "name": "raid_bdev1", 00:16:28.680 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:28.680 "strip_size_kb": 0, 00:16:28.680 "state": "configuring", 00:16:28.680 "raid_level": "raid1", 00:16:28.680 "superblock": true, 00:16:28.680 "num_base_bdevs": 4, 00:16:28.680 "num_base_bdevs_discovered": 1, 00:16:28.680 "num_base_bdevs_operational": 3, 00:16:28.680 "base_bdevs_list": [ 00:16:28.680 { 00:16:28.680 "name": null, 00:16:28.680 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:28.680 "is_configured": false, 00:16:28.680 "data_offset": 2048, 00:16:28.680 "data_size": 63488 00:16:28.680 }, 00:16:28.680 { 00:16:28.680 "name": "pt2", 00:16:28.680 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:28.680 "is_configured": true, 00:16:28.680 "data_offset": 2048, 00:16:28.680 "data_size": 63488 00:16:28.680 }, 00:16:28.680 { 00:16:28.680 "name": null, 00:16:28.680 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:28.680 "is_configured": false, 00:16:28.680 "data_offset": 2048, 00:16:28.680 "data_size": 63488 00:16:28.680 }, 00:16:28.680 { 00:16:28.680 "name": null, 00:16:28.680 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:28.680 "is_configured": false, 00:16:28.680 "data_offset": 2048, 00:16:28.680 "data_size": 63488 00:16:28.680 } 00:16:28.680 ] 00:16:28.680 }' 00:16:28.680 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:28.680 21:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:28.938 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:16:28.938 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:28.938 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@511 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc3 -p pt3 -u 00000000-0000-0000-0000-000000000003 00:16:29.196 [2024-07-14 21:15:40.600143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc3 00:16:29.196 [2024-07-14 21:15:40.600202] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.196 [2024-07-14 21:15:40.600230] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c635680 00:16:29.196 [2024-07-14 21:15:40.600238] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.196 [2024-07-14 21:15:40.600367] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.196 [2024-07-14 21:15:40.600385] vbdev_passthru.c: 709:vbdev_passthru_register: 
*NOTICE*: created pt_bdev for: pt3 00:16:29.196 [2024-07-14 21:15:40.600409] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt3 00:16:29.196 [2024-07-14 21:15:40.600418] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:29.196 pt3 00:16:29.196 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@514 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:29.196 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:29.196 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:29.196 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:29.196 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:29.196 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:29.196 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.196 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.196 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.196 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.196 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.196 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:29.453 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:29.453 "name": "raid_bdev1", 00:16:29.453 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:29.453 "strip_size_kb": 0, 00:16:29.453 "state": "configuring", 00:16:29.453 "raid_level": "raid1", 00:16:29.453 "superblock": true, 00:16:29.453 "num_base_bdevs": 4, 00:16:29.453 "num_base_bdevs_discovered": 2, 00:16:29.453 "num_base_bdevs_operational": 3, 00:16:29.453 "base_bdevs_list": [ 00:16:29.453 { 00:16:29.453 "name": null, 00:16:29.453 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:29.453 "is_configured": false, 00:16:29.453 "data_offset": 2048, 00:16:29.453 "data_size": 63488 00:16:29.453 }, 00:16:29.453 { 00:16:29.453 "name": "pt2", 00:16:29.453 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:29.453 "is_configured": true, 00:16:29.453 "data_offset": 2048, 00:16:29.453 "data_size": 63488 00:16:29.453 }, 00:16:29.453 { 00:16:29.453 "name": "pt3", 00:16:29.453 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:29.453 "is_configured": true, 00:16:29.453 "data_offset": 2048, 00:16:29.453 "data_size": 63488 00:16:29.453 }, 00:16:29.453 { 00:16:29.453 "name": null, 00:16:29.453 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:29.453 "is_configured": false, 00:16:29.453 "data_offset": 2048, 00:16:29.453 "data_size": 63488 00:16:29.453 } 00:16:29.453 ] 00:16:29.453 }' 00:16:29.453 21:15:40 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:29.453 21:15:40 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:29.710 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i++ )) 00:16:29.710 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:16:29.710 21:15:41 
bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@518 -- # i=3 00:16:29.710 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:29.968 [2024-07-14 21:15:41.340173] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:29.968 [2024-07-14 21:15:41.340230] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:29.968 [2024-07-14 21:15:41.340258] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c634c80 00:16:29.968 [2024-07-14 21:15:41.340265] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:29.968 [2024-07-14 21:15:41.340394] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:29.968 [2024-07-14 21:15:41.340405] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:29.968 [2024-07-14 21:15:41.340445] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:29.968 [2024-07-14 21:15:41.340454] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:29.968 [2024-07-14 21:15:41.340515] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10a87c634780 00:16:29.968 [2024-07-14 21:15:41.340520] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:29.968 [2024-07-14 21:15:41.340592] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10a87c697e20 00:16:29.968 [2024-07-14 21:15:41.340645] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10a87c634780 00:16:29.968 [2024-07-14 21:15:41.340649] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10a87c634780 00:16:29.968 [2024-07-14 21:15:41.340671] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:29.968 pt4 00:16:29.968 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:29.968 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:29.968 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:29.968 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:29.968 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:29.968 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:29.968 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:29.968 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:29.968 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:29.968 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:29.968 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:29.969 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:30.226 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # 
raid_bdev_info='{ 00:16:30.226 "name": "raid_bdev1", 00:16:30.226 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:30.226 "strip_size_kb": 0, 00:16:30.226 "state": "online", 00:16:30.226 "raid_level": "raid1", 00:16:30.226 "superblock": true, 00:16:30.226 "num_base_bdevs": 4, 00:16:30.226 "num_base_bdevs_discovered": 3, 00:16:30.226 "num_base_bdevs_operational": 3, 00:16:30.226 "base_bdevs_list": [ 00:16:30.226 { 00:16:30.226 "name": null, 00:16:30.226 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:30.226 "is_configured": false, 00:16:30.226 "data_offset": 2048, 00:16:30.226 "data_size": 63488 00:16:30.226 }, 00:16:30.226 { 00:16:30.226 "name": "pt2", 00:16:30.226 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:30.226 "is_configured": true, 00:16:30.226 "data_offset": 2048, 00:16:30.226 "data_size": 63488 00:16:30.226 }, 00:16:30.226 { 00:16:30.226 "name": "pt3", 00:16:30.226 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:30.226 "is_configured": true, 00:16:30.226 "data_offset": 2048, 00:16:30.226 "data_size": 63488 00:16:30.226 }, 00:16:30.226 { 00:16:30.226 "name": "pt4", 00:16:30.226 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:30.226 "is_configured": true, 00:16:30.227 "data_offset": 2048, 00:16:30.227 "data_size": 63488 00:16:30.227 } 00:16:30.227 ] 00:16:30.227 }' 00:16:30.227 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:30.227 21:15:41 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:30.485 21:15:41 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:30.744 [2024-07-14 21:15:42.096213] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:30.744 [2024-07-14 21:15:42.096230] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:30.744 [2024-07-14 21:15:42.096265] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:30.744 [2024-07-14 21:15:42.096282] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:30.744 [2024-07-14 21:15:42.096286] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10a87c634780 name raid_bdev1, state offline 00:16:30.744 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:30.744 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:16:31.003 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:16:31.003 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:16:31.003 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@531 -- # '[' 4 -gt 2 ']' 00:16:31.003 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@533 -- # i=3 00:16:31.003 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@534 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt4 00:16:31.003 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:31.262 [2024-07-14 21:15:42.712249] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 
00:16:31.262 [2024-07-14 21:15:42.712307] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:31.262 [2024-07-14 21:15:42.712335] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c634c80 00:16:31.262 [2024-07-14 21:15:42.712342] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:31.262 [2024-07-14 21:15:42.713131] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:31.262 [2024-07-14 21:15:42.713171] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:31.262 [2024-07-14 21:15:42.713226] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:16:31.262 [2024-07-14 21:15:42.713262] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:31.262 [2024-07-14 21:15:42.713336] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:16:31.262 [2024-07-14 21:15:42.713340] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:31.262 [2024-07-14 21:15:42.713345] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10a87c634780 name raid_bdev1, state configuring 00:16:31.262 [2024-07-14 21:15:42.713353] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:31.262 [2024-07-14 21:15:42.713372] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt3 is claimed 00:16:31.262 pt1 00:16:31.262 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@541 -- # '[' 4 -gt 2 ']' 00:16:31.262 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@544 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 3 00:16:31.262 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:31.262 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:31.262 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:31.262 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:31.262 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:31.262 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:31.262 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:31.262 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:31.262 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:31.262 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:31.262 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:31.522 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:31.522 "name": "raid_bdev1", 00:16:31.522 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:31.522 "strip_size_kb": 0, 00:16:31.522 "state": "configuring", 00:16:31.522 "raid_level": "raid1", 00:16:31.522 "superblock": true, 00:16:31.522 "num_base_bdevs": 4, 00:16:31.522 "num_base_bdevs_discovered": 2, 00:16:31.522 "num_base_bdevs_operational": 3, 00:16:31.522 
"base_bdevs_list": [ 00:16:31.522 { 00:16:31.522 "name": null, 00:16:31.522 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:31.522 "is_configured": false, 00:16:31.522 "data_offset": 2048, 00:16:31.522 "data_size": 63488 00:16:31.522 }, 00:16:31.522 { 00:16:31.522 "name": "pt2", 00:16:31.522 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:31.522 "is_configured": true, 00:16:31.522 "data_offset": 2048, 00:16:31.522 "data_size": 63488 00:16:31.522 }, 00:16:31.522 { 00:16:31.522 "name": "pt3", 00:16:31.522 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:31.522 "is_configured": true, 00:16:31.522 "data_offset": 2048, 00:16:31.522 "data_size": 63488 00:16:31.522 }, 00:16:31.522 { 00:16:31.522 "name": null, 00:16:31.522 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:31.522 "is_configured": false, 00:16:31.522 "data_offset": 2048, 00:16:31.522 "data_size": 63488 00:16:31.522 } 00:16:31.522 ] 00:16:31.522 }' 00:16:31.522 21:15:42 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:31.522 21:15:42 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:31.781 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs configuring 00:16:31.781 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:32.038 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@545 -- # [[ false == \f\a\l\s\e ]] 00:16:32.038 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@548 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc4 -p pt4 -u 00000000-0000-0000-0000-000000000004 00:16:32.296 [2024-07-14 21:15:43.648299] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc4 00:16:32.296 [2024-07-14 21:15:43.648365] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:32.296 [2024-07-14 21:15:43.648392] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10a87c635180 00:16:32.296 [2024-07-14 21:15:43.648399] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:32.296 [2024-07-14 21:15:43.648513] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:32.296 [2024-07-14 21:15:43.648555] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt4 00:16:32.296 [2024-07-14 21:15:43.648593] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt4 00:16:32.296 [2024-07-14 21:15:43.648602] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt4 is claimed 00:16:32.296 [2024-07-14 21:15:43.648631] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10a87c634780 00:16:32.296 [2024-07-14 21:15:43.648636] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:32.296 [2024-07-14 21:15:43.648656] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10a87c697e20 00:16:32.296 [2024-07-14 21:15:43.648760] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10a87c634780 00:16:32.296 [2024-07-14 21:15:43.648765] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10a87c634780 00:16:32.296 [2024-07-14 21:15:43.648787] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:32.296 pt4 
00:16:32.296 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 3 00:16:32.296 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:32.296 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:32.296 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:32.296 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:32.296 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:32.296 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:32.296 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:32.296 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:32.296 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:32.296 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:32.296 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:32.553 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:32.553 "name": "raid_bdev1", 00:16:32.553 "uuid": "31de96cc-4226-11ef-aa83-81fbc7dfef58", 00:16:32.553 "strip_size_kb": 0, 00:16:32.553 "state": "online", 00:16:32.553 "raid_level": "raid1", 00:16:32.553 "superblock": true, 00:16:32.553 "num_base_bdevs": 4, 00:16:32.553 "num_base_bdevs_discovered": 3, 00:16:32.553 "num_base_bdevs_operational": 3, 00:16:32.553 "base_bdevs_list": [ 00:16:32.553 { 00:16:32.553 "name": null, 00:16:32.553 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:32.553 "is_configured": false, 00:16:32.553 "data_offset": 2048, 00:16:32.553 "data_size": 63488 00:16:32.553 }, 00:16:32.553 { 00:16:32.553 "name": "pt2", 00:16:32.553 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:32.553 "is_configured": true, 00:16:32.553 "data_offset": 2048, 00:16:32.553 "data_size": 63488 00:16:32.553 }, 00:16:32.553 { 00:16:32.553 "name": "pt3", 00:16:32.553 "uuid": "00000000-0000-0000-0000-000000000003", 00:16:32.553 "is_configured": true, 00:16:32.553 "data_offset": 2048, 00:16:32.553 "data_size": 63488 00:16:32.553 }, 00:16:32.553 { 00:16:32.553 "name": "pt4", 00:16:32.553 "uuid": "00000000-0000-0000-0000-000000000004", 00:16:32.553 "is_configured": true, 00:16:32.553 "data_offset": 2048, 00:16:32.553 "data_size": 63488 00:16:32.553 } 00:16:32.553 ] 00:16:32.553 }' 00:16:32.553 21:15:43 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:32.553 21:15:43 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:32.810 21:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:16:32.810 21:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:16:33.069 21:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:16:33.069 21:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
-s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:33.069 21:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:16:33.328 [2024-07-14 21:15:44.616404] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@557 -- # '[' 31de96cc-4226-11ef-aa83-81fbc7dfef58 '!=' 31de96cc-4226-11ef-aa83-81fbc7dfef58 ']' 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@562 -- # killprocess 64495 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@948 -- # '[' -z 64495 ']' 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@952 -- # kill -0 64495 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # uname 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # ps -c -o command 64495 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # tail -1 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:16:33.328 killing process with pid 64495 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 64495' 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@967 -- # kill 64495 00:16:33.328 [2024-07-14 21:15:44.641839] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:33.328 [2024-07-14 21:15:44.641874] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:33.328 [2024-07-14 21:15:44.641891] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:33.328 [2024-07-14 21:15:44.641894] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10a87c634780 name raid_bdev1, state offline 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@972 -- # wait 64495 00:16:33.328 [2024-07-14 21:15:44.666193] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- bdev/bdev_raid.sh@564 -- # return 0 00:16:33.328 00:16:33.328 real 0m19.606s 00:16:33.328 user 0m35.442s 00:16:33.328 sys 0m2.953s 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:33.328 21:15:44 bdev_raid.raid_superblock_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.328 ************************************ 00:16:33.328 END TEST raid_superblock_test 00:16:33.328 ************************************ 00:16:33.589 21:15:44 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:33.589 21:15:44 bdev_raid -- bdev/bdev_raid.sh@870 -- # run_test raid_read_error_test raid_io_error_test raid1 4 read 00:16:33.589 21:15:44 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:33.589 21:15:44 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:33.589 21:15:44 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:33.589 ************************************ 00:16:33.589 START TEST raid_read_error_test 00:16:33.589 
************************************ 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 read 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=read 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.i54DpwUG1N 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65123 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65123 /var/tmp/spdk-raid.sock 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:33.589 21:15:44 
bdev_raid.raid_read_error_test -- common/autotest_common.sh@829 -- # '[' -z 65123 ']' 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:33.589 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:33.589 21:15:44 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:33.589 [2024-07-14 21:15:44.906264] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:33.589 [2024-07-14 21:15:44.906521] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:34.161 EAL: TSC is not safe to use in SMP mode 00:16:34.161 EAL: TSC is not invariant 00:16:34.161 [2024-07-14 21:15:45.421890] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:34.161 [2024-07-14 21:15:45.495320] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:34.161 [2024-07-14 21:15:45.497720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:34.161 [2024-07-14 21:15:45.498669] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.161 [2024-07-14 21:15:45.498701] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:34.420 21:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:34.420 21:15:45 bdev_raid.raid_read_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:34.420 21:15:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:34.420 21:15:45 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:34.678 BaseBdev1_malloc 00:16:34.678 21:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:34.937 true 00:16:34.937 21:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:35.195 [2024-07-14 21:15:46.569797] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:35.195 [2024-07-14 21:15:46.569858] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.195 [2024-07-14 21:15:46.569898] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2bbb9d834780 00:16:35.195 [2024-07-14 21:15:46.569905] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.195 [2024-07-14 21:15:46.570645] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.195 [2024-07-14 21:15:46.570697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:35.195 BaseBdev1 
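Each base bdev in the error test is a three-layer stack: a malloc bdev as backing storage, an error bdev wrapped around it (published under the EE_ prefix), and a passthru on top that the raid will claim. That layering is what lets the test inject failures later without touching any raid code. A sketch of one stack, using the exact RPCs and the 32 MiB / 512-byte geometry from the trace:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# 32 MiB of RAM-backed blocks, 512 bytes each (65536 blocks, 63488 usable
# once the raid reserves its 2048-block data offset for the superblock)
$RPC bdev_malloc_create 32 512 -b BaseBdev1_malloc

# wrap it in an error bdev; the injectable device is EE_BaseBdev1_malloc
$RPC bdev_error_create BaseBdev1_malloc

# the passthru gives the raid a stable top-level name to claim
$RPC bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1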
00:16:35.195 21:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:35.195 21:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:35.454 BaseBdev2_malloc 00:16:35.454 21:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:35.454 true 00:16:35.454 21:15:46 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:35.713 [2024-07-14 21:15:47.189851] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:35.713 [2024-07-14 21:15:47.189891] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:35.713 [2024-07-14 21:15:47.189927] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2bbb9d834c80 00:16:35.713 [2024-07-14 21:15:47.189934] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:35.713 [2024-07-14 21:15:47.190686] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:35.713 [2024-07-14 21:15:47.190726] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:35.713 BaseBdev2 00:16:35.713 21:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:35.713 21:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:35.971 BaseBdev3_malloc 00:16:35.971 21:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:36.228 true 00:16:36.229 21:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:36.486 [2024-07-14 21:15:47.853854] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:36.486 [2024-07-14 21:15:47.853914] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:36.486 [2024-07-14 21:15:47.853950] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2bbb9d835180 00:16:36.486 [2024-07-14 21:15:47.853957] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:36.486 [2024-07-14 21:15:47.854683] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:36.486 [2024-07-14 21:15:47.854723] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:36.486 BaseBdev3 00:16:36.486 21:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:36.486 21:15:47 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:36.744 BaseBdev4_malloc 00:16:36.744 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_error_create BaseBdev4_malloc 00:16:36.744 true 00:16:36.744 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:37.002 [2024-07-14 21:15:48.473903] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:37.002 [2024-07-14 21:15:48.473960] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:37.002 [2024-07-14 21:15:48.473999] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2bbb9d835680 00:16:37.002 [2024-07-14 21:15:48.474006] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:37.002 [2024-07-14 21:15:48.474750] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:37.002 [2024-07-14 21:15:48.474789] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:37.002 BaseBdev4 00:16:37.002 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:37.261 [2024-07-14 21:15:48.721918] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:37.261 [2024-07-14 21:15:48.722446] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:37.261 [2024-07-14 21:15:48.722470] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:37.261 [2024-07-14 21:15:48.722485] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:37.261 [2024-07-14 21:15:48.722545] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2bbb9d835900 00:16:37.261 [2024-07-14 21:15:48.722551] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:37.261 [2024-07-14 21:15:48.722582] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2bbb9d8a0e20 00:16:37.261 [2024-07-14 21:15:48.722684] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2bbb9d835900 00:16:37.261 [2024-07-14 21:15:48.722689] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2bbb9d835900 00:16:37.261 [2024-07-14 21:15:48.722715] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:37.261 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:37.261 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:37.261 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:37.261 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:37.261 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:37.261 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:37.261 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:37.261 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:37.261 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local 
num_base_bdevs_discovered 00:16:37.261 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:37.261 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:37.261 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:37.519 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:37.519 "name": "raid_bdev1", 00:16:37.519 "uuid": "3df1c1b6-4226-11ef-aa83-81fbc7dfef58", 00:16:37.519 "strip_size_kb": 0, 00:16:37.519 "state": "online", 00:16:37.519 "raid_level": "raid1", 00:16:37.519 "superblock": true, 00:16:37.519 "num_base_bdevs": 4, 00:16:37.519 "num_base_bdevs_discovered": 4, 00:16:37.519 "num_base_bdevs_operational": 4, 00:16:37.519 "base_bdevs_list": [ 00:16:37.519 { 00:16:37.519 "name": "BaseBdev1", 00:16:37.519 "uuid": "a71ac0e2-ef8b-e856-82d9-45c4074da772", 00:16:37.519 "is_configured": true, 00:16:37.519 "data_offset": 2048, 00:16:37.519 "data_size": 63488 00:16:37.519 }, 00:16:37.519 { 00:16:37.519 "name": "BaseBdev2", 00:16:37.519 "uuid": "f102fa0a-4213-2f54-a0af-6879e70de984", 00:16:37.519 "is_configured": true, 00:16:37.519 "data_offset": 2048, 00:16:37.519 "data_size": 63488 00:16:37.519 }, 00:16:37.519 { 00:16:37.519 "name": "BaseBdev3", 00:16:37.519 "uuid": "949b8d81-2176-9851-ab6a-efdac8d47e91", 00:16:37.519 "is_configured": true, 00:16:37.519 "data_offset": 2048, 00:16:37.519 "data_size": 63488 00:16:37.519 }, 00:16:37.519 { 00:16:37.519 "name": "BaseBdev4", 00:16:37.519 "uuid": "43d90cc4-7493-dc57-be8e-e4985f17ceef", 00:16:37.520 "is_configured": true, 00:16:37.520 "data_offset": 2048, 00:16:37.520 "data_size": 63488 00:16:37.520 } 00:16:37.520 ] 00:16:37.520 }' 00:16:37.520 21:15:48 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:37.520 21:15:48 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:37.777 21:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:37.777 21:15:49 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:38.036 [2024-07-14 21:15:49.338105] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2bbb9d8a0ec0 00:16:38.972 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc read failure 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@830 -- # [[ read = \w\r\i\t\e ]] 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@833 -- # expected_num_base_bdevs=4 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@118 -- # local 
raid_level=raid1 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:39.230 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:39.488 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:39.488 "name": "raid_bdev1", 00:16:39.488 "uuid": "3df1c1b6-4226-11ef-aa83-81fbc7dfef58", 00:16:39.488 "strip_size_kb": 0, 00:16:39.488 "state": "online", 00:16:39.488 "raid_level": "raid1", 00:16:39.488 "superblock": true, 00:16:39.488 "num_base_bdevs": 4, 00:16:39.488 "num_base_bdevs_discovered": 4, 00:16:39.488 "num_base_bdevs_operational": 4, 00:16:39.488 "base_bdevs_list": [ 00:16:39.488 { 00:16:39.488 "name": "BaseBdev1", 00:16:39.488 "uuid": "a71ac0e2-ef8b-e856-82d9-45c4074da772", 00:16:39.488 "is_configured": true, 00:16:39.488 "data_offset": 2048, 00:16:39.488 "data_size": 63488 00:16:39.488 }, 00:16:39.488 { 00:16:39.488 "name": "BaseBdev2", 00:16:39.488 "uuid": "f102fa0a-4213-2f54-a0af-6879e70de984", 00:16:39.488 "is_configured": true, 00:16:39.488 "data_offset": 2048, 00:16:39.488 "data_size": 63488 00:16:39.488 }, 00:16:39.488 { 00:16:39.488 "name": "BaseBdev3", 00:16:39.488 "uuid": "949b8d81-2176-9851-ab6a-efdac8d47e91", 00:16:39.488 "is_configured": true, 00:16:39.488 "data_offset": 2048, 00:16:39.488 "data_size": 63488 00:16:39.488 }, 00:16:39.488 { 00:16:39.488 "name": "BaseBdev4", 00:16:39.488 "uuid": "43d90cc4-7493-dc57-be8e-e4985f17ceef", 00:16:39.488 "is_configured": true, 00:16:39.488 "data_offset": 2048, 00:16:39.488 "data_size": 63488 00:16:39.488 } 00:16:39.488 ] 00:16:39.488 }' 00:16:39.488 21:15:50 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:39.488 21:15:50 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:39.746 21:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:40.005 [2024-07-14 21:15:51.352261] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:40.005 [2024-07-14 21:15:51.352287] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:40.005 [2024-07-14 21:15:51.352690] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:40.005 [2024-07-14 21:15:51.352716] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:40.005 [2024-07-14 21:15:51.352734] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:40.005 [2024-07-14 21:15:51.352738] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2bbb9d835900 name raid_bdev1, state offline 
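The read-error pass that just finished is the core raid1 property under test: a read failure injected into one base bdev has to be absorbed by a mirror, so the array keeps all four members discovered and the bdevperf log scrape in the teardown below must show a 0.00 failure rate. A sketch of the injection and both assertions; the injection RPC, the jq field, and the grep/awk pipeline are taken from the trace, with this run's mktemp log path:

RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"

# make every read that reaches BaseBdev1's error bdev fail
$RPC bdev_error_inject_error EE_BaseBdev1_malloc read failure

# raid1 must keep all four members: failed reads are served from a mirror
n=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") | .num_base_bdevs_discovered')
[[ $n -eq 4 ]]

# and no failed I/O may surface to the bdevperf consumer
fail_per_s=$(grep -v Job /raidtest/tmp.i54DpwUG1N | grep raid_bdev1 | awk '{print $6}')
[[ $fail_per_s == 0.00 ]]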
00:16:40.005 0 00:16:40.005 21:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65123 00:16:40.005 21:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@948 -- # '[' -z 65123 ']' 00:16:40.005 21:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@952 -- # kill -0 65123 00:16:40.005 21:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # uname 00:16:40.005 21:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:40.005 21:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 65123 00:16:40.005 21:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # tail -1 00:16:40.005 21:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:16:40.005 killing process with pid 65123 00:16:40.005 21:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:16:40.005 21:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65123' 00:16:40.005 21:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@967 -- # kill 65123 00:16:40.005 [2024-07-14 21:15:51.378421] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:40.005 21:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@972 -- # wait 65123 00:16:40.005 [2024-07-14 21:15:51.402113] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:40.264 21:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.i54DpwUG1N 00:16:40.264 21:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:40.264 21:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:40.264 21:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:40.264 21:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:40.264 21:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:40.264 21:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:40.264 21:15:51 bdev_raid.raid_read_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:40.264 00:16:40.264 real 0m6.687s 00:16:40.264 user 0m10.509s 00:16:40.264 sys 0m1.089s 00:16:40.264 21:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:40.264 21:15:51 bdev_raid.raid_read_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.264 ************************************ 00:16:40.264 END TEST raid_read_error_test 00:16:40.264 ************************************ 00:16:40.264 21:15:51 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:40.264 21:15:51 bdev_raid -- bdev/bdev_raid.sh@871 -- # run_test raid_write_error_test raid_io_error_test raid1 4 write 00:16:40.264 21:15:51 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:40.264 21:15:51 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:40.264 21:15:51 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:40.264 ************************************ 00:16:40.264 START TEST raid_write_error_test 00:16:40.264 ************************************ 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1123 -- # raid_io_error_test raid1 4 write 
00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@788 -- # local raid_level=raid1 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@789 -- # local num_base_bdevs=4 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@790 -- # local error_io_type=write 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i = 1 )) 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev1 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev2 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev3 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # echo BaseBdev4 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i++ )) 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # (( i <= num_base_bdevs )) 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # base_bdevs=('BaseBdev1' 'BaseBdev2' 'BaseBdev3' 'BaseBdev4') 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@791 -- # local base_bdevs 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@792 -- # local raid_bdev_name=raid_bdev1 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@793 -- # local strip_size 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@794 -- # local create_arg 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@795 -- # local bdevperf_log 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@796 -- # local fail_per_s 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@798 -- # '[' raid1 '!=' raid1 ']' 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@802 -- # strip_size=0 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # mktemp -p /raidtest 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@805 -- # bdevperf_log=/raidtest/tmp.n4LdQj3274 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@808 -- # raid_pid=65261 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@809 -- # waitforlisten 65261 /var/tmp/spdk-raid.sock 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@829 -- # '[' -z 65261 ']' 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@807 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 128k -q 1 -z -f -L bdev_raid 00:16:40.264 21:15:51 
bdev_raid.raid_write_error_test -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:40.264 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:40.264 21:15:51 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:40.264 [2024-07-14 21:15:51.639333] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:40.264 [2024-07-14 21:15:51.639575] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:40.831 EAL: TSC is not safe to use in SMP mode 00:16:40.831 EAL: TSC is not invariant 00:16:40.831 [2024-07-14 21:15:52.241992] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:40.831 [2024-07-14 21:15:52.342225] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:40.831 [2024-07-14 21:15:52.344877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:40.831 [2024-07-14 21:15:52.345863] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:40.831 [2024-07-14 21:15:52.345876] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:41.397 21:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:41.397 21:15:52 bdev_raid.raid_write_error_test -- common/autotest_common.sh@862 -- # return 0 00:16:41.397 21:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:41.397 21:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev1_malloc 00:16:41.656 BaseBdev1_malloc 00:16:41.656 21:15:52 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev1_malloc 00:16:41.914 true 00:16:41.914 21:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev1_malloc -p BaseBdev1 00:16:42.170 [2024-07-14 21:15:53.529080] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev1_malloc 00:16:42.170 [2024-07-14 21:15:53.529143] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.170 [2024-07-14 21:15:53.529163] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1aad82434780 00:16:42.170 [2024-07-14 21:15:53.529170] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.170 [2024-07-14 21:15:53.529807] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.170 [2024-07-14 21:15:53.529832] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:16:42.170 BaseBdev1 00:16:42.170 21:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in 
"${base_bdevs[@]}" 00:16:42.170 21:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev2_malloc 00:16:42.427 BaseBdev2_malloc 00:16:42.427 21:15:53 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev2_malloc 00:16:42.685 true 00:16:42.685 21:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev2_malloc -p BaseBdev2 00:16:42.943 [2024-07-14 21:15:54.325373] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev2_malloc 00:16:42.943 [2024-07-14 21:15:54.325458] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:42.943 [2024-07-14 21:15:54.325498] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1aad82434c80 00:16:42.943 [2024-07-14 21:15:54.325506] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:42.943 [2024-07-14 21:15:54.326286] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:42.943 [2024-07-14 21:15:54.326325] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:16:42.943 BaseBdev2 00:16:42.943 21:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:42.943 21:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev3_malloc 00:16:43.201 BaseBdev3_malloc 00:16:43.201 21:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev3_malloc 00:16:43.459 true 00:16:43.459 21:15:54 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev3_malloc -p BaseBdev3 00:16:43.716 [2024-07-14 21:15:55.145698] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev3_malloc 00:16:43.717 [2024-07-14 21:15:55.145758] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:43.717 [2024-07-14 21:15:55.145781] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1aad82435180 00:16:43.717 [2024-07-14 21:15:55.145789] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:43.717 [2024-07-14 21:15:55.146406] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:43.717 [2024-07-14 21:15:55.146431] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev3 00:16:43.717 BaseBdev3 00:16:43.717 21:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@812 -- # for bdev in "${base_bdevs[@]}" 00:16:43.717 21:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@813 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 512 -b BaseBdev4_malloc 00:16:43.975 BaseBdev4_malloc 00:16:43.975 21:15:55 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@814 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_create BaseBdev4_malloc 00:16:44.234 true 00:16:44.234 21:15:55 
bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@815 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b EE_BaseBdev4_malloc -p BaseBdev4 00:16:44.492 [2024-07-14 21:15:56.022160] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on EE_BaseBdev4_malloc 00:16:44.492 [2024-07-14 21:15:56.022216] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:44.492 [2024-07-14 21:15:56.022239] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1aad82435680 00:16:44.492 [2024-07-14 21:15:56.022246] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:44.492 [2024-07-14 21:15:56.022986] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:44.492 [2024-07-14 21:15:56.023009] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev4 00:16:44.492 BaseBdev4 00:16:44.751 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@819 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'BaseBdev1 BaseBdev2 BaseBdev3 BaseBdev4' -n raid_bdev1 -s 00:16:45.010 [2024-07-14 21:15:56.302327] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:45.010 [2024-07-14 21:15:56.302826] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:45.010 [2024-07-14 21:15:56.302850] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev3 is claimed 00:16:45.010 [2024-07-14 21:15:56.302865] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev4 is claimed 00:16:45.010 [2024-07-14 21:15:56.302971] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x1aad82435900 00:16:45.010 [2024-07-14 21:15:56.302977] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 63488, blocklen 512 00:16:45.010 [2024-07-14 21:15:56.303009] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1aad824a0e20 00:16:45.010 [2024-07-14 21:15:56.303099] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x1aad82435900 00:16:45.010 [2024-07-14 21:15:56.303103] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x1aad82435900 00:16:45.010 [2024-07-14 21:15:56.303131] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:45.010 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@820 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 4 00:16:45.010 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:45.010 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:45.010 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:45.010 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:45.010 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=4 00:16:45.010 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:45.010 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:45.010 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:45.010 21:15:56 bdev_raid.raid_write_error_test -- 
bdev/bdev_raid.sh@124 -- # local tmp 00:16:45.010 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:45.010 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:45.269 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:45.269 "name": "raid_bdev1", 00:16:45.269 "uuid": "42766f6c-4226-11ef-aa83-81fbc7dfef58", 00:16:45.269 "strip_size_kb": 0, 00:16:45.269 "state": "online", 00:16:45.269 "raid_level": "raid1", 00:16:45.269 "superblock": true, 00:16:45.269 "num_base_bdevs": 4, 00:16:45.269 "num_base_bdevs_discovered": 4, 00:16:45.270 "num_base_bdevs_operational": 4, 00:16:45.270 "base_bdevs_list": [ 00:16:45.270 { 00:16:45.270 "name": "BaseBdev1", 00:16:45.270 "uuid": "c9444e4a-0e14-ec5c-b149-2267c7d4939e", 00:16:45.270 "is_configured": true, 00:16:45.270 "data_offset": 2048, 00:16:45.270 "data_size": 63488 00:16:45.270 }, 00:16:45.270 { 00:16:45.270 "name": "BaseBdev2", 00:16:45.270 "uuid": "c7bcfe01-b37d-3256-a773-5c56d75c3c85", 00:16:45.270 "is_configured": true, 00:16:45.270 "data_offset": 2048, 00:16:45.270 "data_size": 63488 00:16:45.270 }, 00:16:45.270 { 00:16:45.270 "name": "BaseBdev3", 00:16:45.270 "uuid": "3325786d-c183-f456-b1f3-2c988be4f99d", 00:16:45.270 "is_configured": true, 00:16:45.270 "data_offset": 2048, 00:16:45.270 "data_size": 63488 00:16:45.270 }, 00:16:45.270 { 00:16:45.270 "name": "BaseBdev4", 00:16:45.270 "uuid": "268e91a0-9eb6-c454-8c27-eb97d8e6268a", 00:16:45.270 "is_configured": true, 00:16:45.270 "data_offset": 2048, 00:16:45.270 "data_size": 63488 00:16:45.270 } 00:16:45.270 ] 00:16:45.270 }' 00:16:45.270 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:45.270 21:15:56 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:45.528 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@824 -- # sleep 1 00:16:45.528 21:15:56 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@823 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/spdk-raid.sock perform_tests 00:16:45.528 [2024-07-14 21:15:57.042841] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x1aad824a0ec0 00:16:46.466 21:15:57 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@827 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_error_inject_error EE_BaseBdev1_malloc write failure 00:16:47.034 [2024-07-14 21:15:58.285742] bdev_raid.c:2222:_raid_bdev_fail_base_bdev: *NOTICE*: Failing base bdev in slot 0 ('BaseBdev1') of raid bdev 'raid_bdev1' 00:16:47.034 [2024-07-14 21:15:58.285868] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:47.034 [2024-07-14 21:15:58.286017] bdev_raid.c:1919:raid_bdev_channel_remove_base_bdev: *DEBUG*: slot: 0 raid_ch: 0x1aad824a0ec0 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@829 -- # local expected_num_base_bdevs 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ raid1 = \r\a\i\d\1 ]] 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@830 -- # [[ write = \w\r\i\t\e ]] 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@831 -- # expected_num_base_bdevs=3 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@835 -- # verify_raid_bdev_state 
raid_bdev1 online raid1 0 3 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=3 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:47.034 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:47.294 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:47.294 "name": "raid_bdev1", 00:16:47.294 "uuid": "42766f6c-4226-11ef-aa83-81fbc7dfef58", 00:16:47.294 "strip_size_kb": 0, 00:16:47.294 "state": "online", 00:16:47.294 "raid_level": "raid1", 00:16:47.294 "superblock": true, 00:16:47.294 "num_base_bdevs": 4, 00:16:47.294 "num_base_bdevs_discovered": 3, 00:16:47.294 "num_base_bdevs_operational": 3, 00:16:47.294 "base_bdevs_list": [ 00:16:47.294 { 00:16:47.294 "name": null, 00:16:47.294 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:47.294 "is_configured": false, 00:16:47.294 "data_offset": 2048, 00:16:47.294 "data_size": 63488 00:16:47.294 }, 00:16:47.294 { 00:16:47.294 "name": "BaseBdev2", 00:16:47.294 "uuid": "c7bcfe01-b37d-3256-a773-5c56d75c3c85", 00:16:47.294 "is_configured": true, 00:16:47.294 "data_offset": 2048, 00:16:47.294 "data_size": 63488 00:16:47.294 }, 00:16:47.294 { 00:16:47.294 "name": "BaseBdev3", 00:16:47.294 "uuid": "3325786d-c183-f456-b1f3-2c988be4f99d", 00:16:47.294 "is_configured": true, 00:16:47.294 "data_offset": 2048, 00:16:47.294 "data_size": 63488 00:16:47.294 }, 00:16:47.294 { 00:16:47.294 "name": "BaseBdev4", 00:16:47.294 "uuid": "268e91a0-9eb6-c454-8c27-eb97d8e6268a", 00:16:47.294 "is_configured": true, 00:16:47.294 "data_offset": 2048, 00:16:47.294 "data_size": 63488 00:16:47.294 } 00:16:47.294 ] 00:16:47.294 }' 00:16:47.294 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:47.294 21:15:58 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:47.553 21:15:58 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@837 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:16:47.813 [2024-07-14 21:15:59.233025] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:16:47.813 [2024-07-14 21:15:59.233052] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:47.813 [2024-07-14 21:15:59.233374] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:47.813 [2024-07-14 21:15:59.233384] bdev_raid.c: 
331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:47.813 [2024-07-14 21:15:59.233399] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:47.813 [2024-07-14 21:15:59.233403] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x1aad82435900 name raid_bdev1, state offline 00:16:47.813 0 00:16:47.813 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@839 -- # killprocess 65261 00:16:47.813 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@948 -- # '[' -z 65261 ']' 00:16:47.813 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@952 -- # kill -0 65261 00:16:47.813 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # uname 00:16:47.813 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:47.813 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # ps -c -o command 65261 00:16:47.813 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # tail -1 00:16:47.813 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:16:47.813 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:16:47.813 killing process with pid 65261 00:16:47.813 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65261' 00:16:47.813 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@967 -- # kill 65261 00:16:47.813 [2024-07-14 21:15:59.266085] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:47.813 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@972 -- # wait 65261 00:16:47.813 [2024-07-14 21:15:59.293876] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:48.072 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep -v Job /raidtest/tmp.n4LdQj3274 00:16:48.072 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # awk '{print $6}' 00:16:48.072 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # grep raid_bdev1 00:16:48.072 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@843 -- # fail_per_s=0.00 00:16:48.072 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@844 -- # has_redundancy raid1 00:16:48.072 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:48.072 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@214 -- # return 0 00:16:48.072 21:15:59 bdev_raid.raid_write_error_test -- bdev/bdev_raid.sh@845 -- # [[ 0.00 = \0\.\0\0 ]] 00:16:48.072 00:16:48.072 real 0m7.884s 00:16:48.072 user 0m12.664s 00:16:48.072 sys 0m1.305s 00:16:48.072 ************************************ 00:16:48.072 END TEST raid_write_error_test 00:16:48.072 ************************************ 00:16:48.072 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:48.072 21:15:59 bdev_raid.raid_write_error_test -- common/autotest_common.sh@10 -- # set +x 00:16:48.072 21:15:59 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:48.072 21:15:59 bdev_raid -- bdev/bdev_raid.sh@875 -- # '[' '' = true ']' 00:16:48.072 21:15:59 bdev_raid -- bdev/bdev_raid.sh@884 -- # '[' n == y ']' 00:16:48.072 21:15:59 bdev_raid -- bdev/bdev_raid.sh@896 -- # base_blocklen=4096 00:16:48.072 21:15:59 
bdev_raid -- bdev/bdev_raid.sh@898 -- # run_test raid_state_function_test_sb_4k raid_state_function_test raid1 2 true 00:16:48.072 21:15:59 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:16:48.072 21:15:59 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:48.072 21:15:59 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:48.072 ************************************ 00:16:48.072 START TEST raid_state_function_test_sb_4k 00:16:48.072 ************************************ 00:16:48.072 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:16:48.072 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:16:48.072 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:16:48.072 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:16:48.072 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:16:48.072 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:16:48.072 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:48.072 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:16:48.072 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:48.072 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:48.072 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:16:48.072 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@226 -- # local strip_size 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@244 -- # raid_pid=65397 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 65397' 00:16:48.073 Process raid pid: 65397 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@243 -- # 
/home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@246 -- # waitforlisten 65397 /var/tmp/spdk-raid.sock 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@829 -- # '[' -z 65397 ']' 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:48.073 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:48.073 21:15:59 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:48.073 [2024-07-14 21:15:59.567015] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:48.073 [2024-07-14 21:15:59.567282] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:48.640 EAL: TSC is not safe to use in SMP mode 00:16:48.640 EAL: TSC is not invariant 00:16:48.640 [2024-07-14 21:16:00.176988] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:48.899 [2024-07-14 21:16:00.282665] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:48.899 [2024-07-14 21:16:00.285105] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:48.899 [2024-07-14 21:16:00.286009] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:48.899 [2024-07-14 21:16:00.286041] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:49.158 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:49.158 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@862 -- # return 0 00:16:49.158 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:49.417 [2024-07-14 21:16:00.905131] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:49.417 [2024-07-14 21:16:00.905207] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:49.417 [2024-07-14 21:16:00.905228] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:49.417 [2024-07-14 21:16:00.905236] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:49.417 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:49.417 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:49.417 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:49.417 21:16:00 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:49.417 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:49.417 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:49.417 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:49.417 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:49.417 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:49.417 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:49.417 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:49.417 21:16:00 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:49.675 21:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:49.675 "name": "Existed_Raid", 00:16:49.675 "uuid": "4534c485-4226-11ef-aa83-81fbc7dfef58", 00:16:49.675 "strip_size_kb": 0, 00:16:49.675 "state": "configuring", 00:16:49.675 "raid_level": "raid1", 00:16:49.675 "superblock": true, 00:16:49.675 "num_base_bdevs": 2, 00:16:49.675 "num_base_bdevs_discovered": 0, 00:16:49.675 "num_base_bdevs_operational": 2, 00:16:49.675 "base_bdevs_list": [ 00:16:49.675 { 00:16:49.675 "name": "BaseBdev1", 00:16:49.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.675 "is_configured": false, 00:16:49.675 "data_offset": 0, 00:16:49.675 "data_size": 0 00:16:49.675 }, 00:16:49.675 { 00:16:49.675 "name": "BaseBdev2", 00:16:49.675 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:49.675 "is_configured": false, 00:16:49.675 "data_offset": 0, 00:16:49.675 "data_size": 0 00:16:49.675 } 00:16:49.675 ] 00:16:49.675 }' 00:16:49.675 21:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:49.675 21:16:01 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:50.240 21:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:50.240 [2024-07-14 21:16:01.761381] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:50.240 [2024-07-14 21:16:01.761405] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3982b2a34500 name Existed_Raid, state configuring 00:16:50.240 21:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:50.497 [2024-07-14 21:16:01.965408] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:16:50.497 [2024-07-14 21:16:01.965441] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:16:50.497 [2024-07-14 21:16:01.965445] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:50.497 [2024-07-14 21:16:01.965469] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:50.497 
21:16:01 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev1 00:16:50.755 [2024-07-14 21:16:02.214304] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:50.755 BaseBdev1 00:16:50.755 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:16:50.755 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:16:50.755 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:50.755 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:16:50.755 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:50.755 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:50.755 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:51.013 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:16:51.270 [ 00:16:51.270 { 00:16:51.270 "name": "BaseBdev1", 00:16:51.270 "aliases": [ 00:16:51.270 "45fc6613-4226-11ef-aa83-81fbc7dfef58" 00:16:51.270 ], 00:16:51.270 "product_name": "Malloc disk", 00:16:51.270 "block_size": 4096, 00:16:51.270 "num_blocks": 8192, 00:16:51.270 "uuid": "45fc6613-4226-11ef-aa83-81fbc7dfef58", 00:16:51.271 "assigned_rate_limits": { 00:16:51.271 "rw_ios_per_sec": 0, 00:16:51.271 "rw_mbytes_per_sec": 0, 00:16:51.271 "r_mbytes_per_sec": 0, 00:16:51.271 "w_mbytes_per_sec": 0 00:16:51.271 }, 00:16:51.271 "claimed": true, 00:16:51.271 "claim_type": "exclusive_write", 00:16:51.271 "zoned": false, 00:16:51.271 "supported_io_types": { 00:16:51.271 "read": true, 00:16:51.271 "write": true, 00:16:51.271 "unmap": true, 00:16:51.271 "flush": true, 00:16:51.271 "reset": true, 00:16:51.271 "nvme_admin": false, 00:16:51.271 "nvme_io": false, 00:16:51.271 "nvme_io_md": false, 00:16:51.271 "write_zeroes": true, 00:16:51.271 "zcopy": true, 00:16:51.271 "get_zone_info": false, 00:16:51.271 "zone_management": false, 00:16:51.271 "zone_append": false, 00:16:51.271 "compare": false, 00:16:51.271 "compare_and_write": false, 00:16:51.271 "abort": true, 00:16:51.271 "seek_hole": false, 00:16:51.271 "seek_data": false, 00:16:51.271 "copy": true, 00:16:51.271 "nvme_iov_md": false 00:16:51.271 }, 00:16:51.271 "memory_domains": [ 00:16:51.271 { 00:16:51.271 "dma_device_id": "system", 00:16:51.271 "dma_device_type": 1 00:16:51.271 }, 00:16:51.271 { 00:16:51.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:51.271 "dma_device_type": 2 00:16:51.271 } 00:16:51.271 ], 00:16:51.271 "driver_specific": {} 00:16:51.271 } 00:16:51.271 ] 00:16:51.271 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:16:51.271 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:51.271 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:51.271 21:16:02 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:51.271 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:51.271 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:51.271 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:51.271 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:51.271 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:51.271 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:51.271 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:51.271 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:51.271 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:51.529 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:51.529 "name": "Existed_Raid", 00:16:51.529 "uuid": "45d68d9d-4226-11ef-aa83-81fbc7dfef58", 00:16:51.529 "strip_size_kb": 0, 00:16:51.529 "state": "configuring", 00:16:51.529 "raid_level": "raid1", 00:16:51.529 "superblock": true, 00:16:51.529 "num_base_bdevs": 2, 00:16:51.529 "num_base_bdevs_discovered": 1, 00:16:51.529 "num_base_bdevs_operational": 2, 00:16:51.529 "base_bdevs_list": [ 00:16:51.529 { 00:16:51.529 "name": "BaseBdev1", 00:16:51.529 "uuid": "45fc6613-4226-11ef-aa83-81fbc7dfef58", 00:16:51.529 "is_configured": true, 00:16:51.529 "data_offset": 256, 00:16:51.529 "data_size": 7936 00:16:51.529 }, 00:16:51.529 { 00:16:51.529 "name": "BaseBdev2", 00:16:51.529 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:51.529 "is_configured": false, 00:16:51.529 "data_offset": 0, 00:16:51.529 "data_size": 0 00:16:51.529 } 00:16:51.529 ] 00:16:51.529 }' 00:16:51.529 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:51.529 21:16:02 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:51.787 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:16:52.045 [2024-07-14 21:16:03.441483] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:16:52.045 [2024-07-14 21:16:03.441525] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3982b2a34500 name Existed_Raid, state configuring 00:16:52.045 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:16:52.303 [2024-07-14 21:16:03.653564] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:16:52.303 [2024-07-14 21:16:03.654524] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:16:52.303 [2024-07-14 21:16:03.654606] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:16:52.303 21:16:03 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:16:52.303 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:52.303 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:16:52.303 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:52.303 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:16:52.303 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:52.303 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:52.303 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:52.303 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:52.303 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:52.303 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:52.303 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:52.303 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:52.303 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:52.561 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:52.561 "name": "Existed_Raid", 00:16:52.561 "uuid": "46d82510-4226-11ef-aa83-81fbc7dfef58", 00:16:52.561 "strip_size_kb": 0, 00:16:52.561 "state": "configuring", 00:16:52.561 "raid_level": "raid1", 00:16:52.561 "superblock": true, 00:16:52.561 "num_base_bdevs": 2, 00:16:52.561 "num_base_bdevs_discovered": 1, 00:16:52.561 "num_base_bdevs_operational": 2, 00:16:52.561 "base_bdevs_list": [ 00:16:52.561 { 00:16:52.561 "name": "BaseBdev1", 00:16:52.561 "uuid": "45fc6613-4226-11ef-aa83-81fbc7dfef58", 00:16:52.561 "is_configured": true, 00:16:52.561 "data_offset": 256, 00:16:52.561 "data_size": 7936 00:16:52.562 }, 00:16:52.562 { 00:16:52.562 "name": "BaseBdev2", 00:16:52.562 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:52.562 "is_configured": false, 00:16:52.562 "data_offset": 0, 00:16:52.562 "data_size": 0 00:16:52.562 } 00:16:52.562 ] 00:16:52.562 }' 00:16:52.562 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:52.562 21:16:03 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:52.820 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b BaseBdev2 00:16:53.079 [2024-07-14 21:16:04.437784] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:16:53.079 [2024-07-14 21:16:04.437862] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3982b2a34a00 00:16:53.079 [2024-07-14 21:16:04.437868] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:53.079 [2024-07-14 
21:16:04.437888] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3982b2a97e20 00:16:53.079 [2024-07-14 21:16:04.437938] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3982b2a34a00 00:16:53.079 [2024-07-14 21:16:04.437943] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x3982b2a34a00 00:16:53.079 [2024-07-14 21:16:04.437965] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:53.079 BaseBdev2 00:16:53.079 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:16:53.079 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:16:53.079 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:16:53.079 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@899 -- # local i 00:16:53.079 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:16:53.079 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:16:53.079 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:16:53.336 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:16:53.594 [ 00:16:53.594 { 00:16:53.594 "name": "BaseBdev2", 00:16:53.594 "aliases": [ 00:16:53.594 "474fc853-4226-11ef-aa83-81fbc7dfef58" 00:16:53.594 ], 00:16:53.594 "product_name": "Malloc disk", 00:16:53.594 "block_size": 4096, 00:16:53.594 "num_blocks": 8192, 00:16:53.594 "uuid": "474fc853-4226-11ef-aa83-81fbc7dfef58", 00:16:53.594 "assigned_rate_limits": { 00:16:53.594 "rw_ios_per_sec": 0, 00:16:53.594 "rw_mbytes_per_sec": 0, 00:16:53.594 "r_mbytes_per_sec": 0, 00:16:53.594 "w_mbytes_per_sec": 0 00:16:53.594 }, 00:16:53.594 "claimed": true, 00:16:53.594 "claim_type": "exclusive_write", 00:16:53.594 "zoned": false, 00:16:53.594 "supported_io_types": { 00:16:53.594 "read": true, 00:16:53.594 "write": true, 00:16:53.594 "unmap": true, 00:16:53.594 "flush": true, 00:16:53.594 "reset": true, 00:16:53.594 "nvme_admin": false, 00:16:53.594 "nvme_io": false, 00:16:53.594 "nvme_io_md": false, 00:16:53.594 "write_zeroes": true, 00:16:53.594 "zcopy": true, 00:16:53.594 "get_zone_info": false, 00:16:53.594 "zone_management": false, 00:16:53.594 "zone_append": false, 00:16:53.594 "compare": false, 00:16:53.594 "compare_and_write": false, 00:16:53.594 "abort": true, 00:16:53.594 "seek_hole": false, 00:16:53.594 "seek_data": false, 00:16:53.594 "copy": true, 00:16:53.594 "nvme_iov_md": false 00:16:53.594 }, 00:16:53.594 "memory_domains": [ 00:16:53.594 { 00:16:53.594 "dma_device_id": "system", 00:16:53.594 "dma_device_type": 1 00:16:53.594 }, 00:16:53.594 { 00:16:53.594 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:53.594 "dma_device_type": 2 00:16:53.594 } 00:16:53.594 ], 00:16:53.594 "driver_specific": {} 00:16:53.594 } 00:16:53.594 ] 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@905 -- # return 0 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:16:53.594 21:16:04 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:53.594 21:16:04 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:53.852 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:53.852 "name": "Existed_Raid", 00:16:53.852 "uuid": "46d82510-4226-11ef-aa83-81fbc7dfef58", 00:16:53.852 "strip_size_kb": 0, 00:16:53.852 "state": "online", 00:16:53.852 "raid_level": "raid1", 00:16:53.852 "superblock": true, 00:16:53.852 "num_base_bdevs": 2, 00:16:53.852 "num_base_bdevs_discovered": 2, 00:16:53.852 "num_base_bdevs_operational": 2, 00:16:53.852 "base_bdevs_list": [ 00:16:53.852 { 00:16:53.852 "name": "BaseBdev1", 00:16:53.852 "uuid": "45fc6613-4226-11ef-aa83-81fbc7dfef58", 00:16:53.852 "is_configured": true, 00:16:53.852 "data_offset": 256, 00:16:53.852 "data_size": 7936 00:16:53.852 }, 00:16:53.852 { 00:16:53.852 "name": "BaseBdev2", 00:16:53.852 "uuid": "474fc853-4226-11ef-aa83-81fbc7dfef58", 00:16:53.852 "is_configured": true, 00:16:53.852 "data_offset": 256, 00:16:53.852 "data_size": 7936 00:16:53.852 } 00:16:53.852 ] 00:16:53.852 }' 00:16:53.852 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:53.852 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:54.109 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:16:54.110 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:16:54.110 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:54.110 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:54.110 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:54.110 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@198 -- # local name 00:16:54.110 
21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:16:54.110 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:54.368 [2024-07-14 21:16:05.713626] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:54.368 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:54.368 "name": "Existed_Raid", 00:16:54.368 "aliases": [ 00:16:54.368 "46d82510-4226-11ef-aa83-81fbc7dfef58" 00:16:54.368 ], 00:16:54.368 "product_name": "Raid Volume", 00:16:54.368 "block_size": 4096, 00:16:54.368 "num_blocks": 7936, 00:16:54.368 "uuid": "46d82510-4226-11ef-aa83-81fbc7dfef58", 00:16:54.368 "assigned_rate_limits": { 00:16:54.368 "rw_ios_per_sec": 0, 00:16:54.368 "rw_mbytes_per_sec": 0, 00:16:54.368 "r_mbytes_per_sec": 0, 00:16:54.368 "w_mbytes_per_sec": 0 00:16:54.368 }, 00:16:54.368 "claimed": false, 00:16:54.368 "zoned": false, 00:16:54.368 "supported_io_types": { 00:16:54.368 "read": true, 00:16:54.368 "write": true, 00:16:54.368 "unmap": false, 00:16:54.368 "flush": false, 00:16:54.368 "reset": true, 00:16:54.368 "nvme_admin": false, 00:16:54.368 "nvme_io": false, 00:16:54.368 "nvme_io_md": false, 00:16:54.368 "write_zeroes": true, 00:16:54.368 "zcopy": false, 00:16:54.368 "get_zone_info": false, 00:16:54.368 "zone_management": false, 00:16:54.368 "zone_append": false, 00:16:54.368 "compare": false, 00:16:54.368 "compare_and_write": false, 00:16:54.368 "abort": false, 00:16:54.368 "seek_hole": false, 00:16:54.368 "seek_data": false, 00:16:54.368 "copy": false, 00:16:54.368 "nvme_iov_md": false 00:16:54.368 }, 00:16:54.368 "memory_domains": [ 00:16:54.368 { 00:16:54.368 "dma_device_id": "system", 00:16:54.368 "dma_device_type": 1 00:16:54.368 }, 00:16:54.368 { 00:16:54.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.368 "dma_device_type": 2 00:16:54.368 }, 00:16:54.368 { 00:16:54.368 "dma_device_id": "system", 00:16:54.368 "dma_device_type": 1 00:16:54.368 }, 00:16:54.368 { 00:16:54.368 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.368 "dma_device_type": 2 00:16:54.368 } 00:16:54.368 ], 00:16:54.368 "driver_specific": { 00:16:54.368 "raid": { 00:16:54.368 "uuid": "46d82510-4226-11ef-aa83-81fbc7dfef58", 00:16:54.368 "strip_size_kb": 0, 00:16:54.368 "state": "online", 00:16:54.368 "raid_level": "raid1", 00:16:54.368 "superblock": true, 00:16:54.368 "num_base_bdevs": 2, 00:16:54.368 "num_base_bdevs_discovered": 2, 00:16:54.368 "num_base_bdevs_operational": 2, 00:16:54.368 "base_bdevs_list": [ 00:16:54.368 { 00:16:54.368 "name": "BaseBdev1", 00:16:54.368 "uuid": "45fc6613-4226-11ef-aa83-81fbc7dfef58", 00:16:54.368 "is_configured": true, 00:16:54.368 "data_offset": 256, 00:16:54.368 "data_size": 7936 00:16:54.368 }, 00:16:54.368 { 00:16:54.368 "name": "BaseBdev2", 00:16:54.368 "uuid": "474fc853-4226-11ef-aa83-81fbc7dfef58", 00:16:54.368 "is_configured": true, 00:16:54.368 "data_offset": 256, 00:16:54.368 "data_size": 7936 00:16:54.368 } 00:16:54.368 ] 00:16:54.368 } 00:16:54.368 } 00:16:54.368 }' 00:16:54.368 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:54.368 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:16:54.368 BaseBdev2' 00:16:54.368 21:16:05 
bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:54.368 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:16:54.368 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:54.627 "name": "BaseBdev1", 00:16:54.627 "aliases": [ 00:16:54.627 "45fc6613-4226-11ef-aa83-81fbc7dfef58" 00:16:54.627 ], 00:16:54.627 "product_name": "Malloc disk", 00:16:54.627 "block_size": 4096, 00:16:54.627 "num_blocks": 8192, 00:16:54.627 "uuid": "45fc6613-4226-11ef-aa83-81fbc7dfef58", 00:16:54.627 "assigned_rate_limits": { 00:16:54.627 "rw_ios_per_sec": 0, 00:16:54.627 "rw_mbytes_per_sec": 0, 00:16:54.627 "r_mbytes_per_sec": 0, 00:16:54.627 "w_mbytes_per_sec": 0 00:16:54.627 }, 00:16:54.627 "claimed": true, 00:16:54.627 "claim_type": "exclusive_write", 00:16:54.627 "zoned": false, 00:16:54.627 "supported_io_types": { 00:16:54.627 "read": true, 00:16:54.627 "write": true, 00:16:54.627 "unmap": true, 00:16:54.627 "flush": true, 00:16:54.627 "reset": true, 00:16:54.627 "nvme_admin": false, 00:16:54.627 "nvme_io": false, 00:16:54.627 "nvme_io_md": false, 00:16:54.627 "write_zeroes": true, 00:16:54.627 "zcopy": true, 00:16:54.627 "get_zone_info": false, 00:16:54.627 "zone_management": false, 00:16:54.627 "zone_append": false, 00:16:54.627 "compare": false, 00:16:54.627 "compare_and_write": false, 00:16:54.627 "abort": true, 00:16:54.627 "seek_hole": false, 00:16:54.627 "seek_data": false, 00:16:54.627 "copy": true, 00:16:54.627 "nvme_iov_md": false 00:16:54.627 }, 00:16:54.627 "memory_domains": [ 00:16:54.627 { 00:16:54.627 "dma_device_id": "system", 00:16:54.627 "dma_device_type": 1 00:16:54.627 }, 00:16:54.627 { 00:16:54.627 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.627 "dma_device_type": 2 00:16:54.627 } 00:16:54.627 ], 00:16:54.627 "driver_specific": {} 00:16:54.627 }' 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 
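The property checks traced above come from the per-base-bdev loop of verify_raid_bdev_properties (bdev_raid.sh@203-208): for each base bdev the helper pulls the bdev's JSON descriptor over the RPC socket and asserts a 4 KiB block size with no metadata or DIF fields. The following is a condensed, hand-written sketch of that loop, not the helper verbatim; the rpc.py path, socket, and bdev names are taken from this run, and the expected 4096 block size comes from the earlier 'bdev_malloc_create 32 4096' calls:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    for name in BaseBdev1 BaseBdev2; do
        # Fetch the bdev's descriptor; bdev_get_bdevs returns a one-element array.
        info=$("$rpc" -s "$sock" bdev_get_bdevs -b "$name" | jq '.[]')
        # Malloc base bdevs carry no metadata, so only block_size is non-null.
        [[ $(jq .block_size <<<"$info") == 4096 ]]
        [[ $(jq .md_size <<<"$info") == null ]]
        [[ $(jq .md_interleave <<<"$info") == null ]]
        [[ $(jq .dif_type <<<"$info") == null ]]
    done

The '[[ 4096 == 4096 ]]' and '[[ null == null ]]' comparisons in the trace are these assertions passing for BaseBdev1; the same loop body repeats for BaseBdev2 immediately below.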
00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:16:54.627 21:16:05 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:54.886 "name": "BaseBdev2", 00:16:54.886 "aliases": [ 00:16:54.886 "474fc853-4226-11ef-aa83-81fbc7dfef58" 00:16:54.886 ], 00:16:54.886 "product_name": "Malloc disk", 00:16:54.886 "block_size": 4096, 00:16:54.886 "num_blocks": 8192, 00:16:54.886 "uuid": "474fc853-4226-11ef-aa83-81fbc7dfef58", 00:16:54.886 "assigned_rate_limits": { 00:16:54.886 "rw_ios_per_sec": 0, 00:16:54.886 "rw_mbytes_per_sec": 0, 00:16:54.886 "r_mbytes_per_sec": 0, 00:16:54.886 "w_mbytes_per_sec": 0 00:16:54.886 }, 00:16:54.886 "claimed": true, 00:16:54.886 "claim_type": "exclusive_write", 00:16:54.886 "zoned": false, 00:16:54.886 "supported_io_types": { 00:16:54.886 "read": true, 00:16:54.886 "write": true, 00:16:54.886 "unmap": true, 00:16:54.886 "flush": true, 00:16:54.886 "reset": true, 00:16:54.886 "nvme_admin": false, 00:16:54.886 "nvme_io": false, 00:16:54.886 "nvme_io_md": false, 00:16:54.886 "write_zeroes": true, 00:16:54.886 "zcopy": true, 00:16:54.886 "get_zone_info": false, 00:16:54.886 "zone_management": false, 00:16:54.886 "zone_append": false, 00:16:54.886 "compare": false, 00:16:54.886 "compare_and_write": false, 00:16:54.886 "abort": true, 00:16:54.886 "seek_hole": false, 00:16:54.886 "seek_data": false, 00:16:54.886 "copy": true, 00:16:54.886 "nvme_iov_md": false 00:16:54.886 }, 00:16:54.886 "memory_domains": [ 00:16:54.886 { 00:16:54.886 "dma_device_id": "system", 00:16:54.886 "dma_device_type": 1 00:16:54.886 }, 00:16:54.886 { 00:16:54.886 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:54.886 "dma_device_type": 2 00:16:54.886 } 00:16:54.886 ], 00:16:54.886 "driver_specific": {} 00:16:54.886 }' 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:54.886 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:16:55.145 [2024-07-14 
21:16:06.465658] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@275 -- # local expected_state 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:16:55.145 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.403 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:55.403 "name": "Existed_Raid", 00:16:55.403 "uuid": "46d82510-4226-11ef-aa83-81fbc7dfef58", 00:16:55.403 "strip_size_kb": 0, 00:16:55.403 "state": "online", 00:16:55.403 "raid_level": "raid1", 00:16:55.403 "superblock": true, 00:16:55.403 "num_base_bdevs": 2, 00:16:55.403 "num_base_bdevs_discovered": 1, 00:16:55.403 "num_base_bdevs_operational": 1, 00:16:55.403 "base_bdevs_list": [ 00:16:55.403 { 00:16:55.403 "name": null, 00:16:55.403 "uuid": "00000000-0000-0000-0000-000000000000", 00:16:55.403 "is_configured": false, 00:16:55.403 "data_offset": 256, 00:16:55.403 "data_size": 7936 00:16:55.403 }, 00:16:55.403 { 00:16:55.403 "name": "BaseBdev2", 00:16:55.403 "uuid": "474fc853-4226-11ef-aa83-81fbc7dfef58", 00:16:55.403 "is_configured": true, 00:16:55.403 "data_offset": 256, 00:16:55.403 "data_size": 7936 00:16:55.403 } 00:16:55.403 ] 00:16:55.403 }' 00:16:55.403 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:55.403 21:16:06 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:55.661 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:16:55.661 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # 
(( i < num_base_bdevs )) 00:16:55.661 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:55.661 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:16:55.919 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:16:55.919 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:16:55.919 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:16:56.177 [2024-07-14 21:16:07.479761] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:16:56.177 [2024-07-14 21:16:07.479797] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:16:56.177 [2024-07-14 21:16:07.486034] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:16:56.177 [2024-07-14 21:16:07.486050] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:16:56.177 [2024-07-14 21:16:07.486071] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3982b2a34a00 name Existed_Raid, state offline 00:16:56.177 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:16:56.177 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:16:56.177 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:56.177 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@341 -- # killprocess 65397 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@948 -- # '[' -z 65397 ']' 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@952 -- # kill -0 65397 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # uname 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65397 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # tail -1 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:16:56.435 killing process with pid 65397 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65397' 00:16:56.435 21:16:07 
bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@967 -- # kill 65397 00:16:56.435 [2024-07-14 21:16:07.756184] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@972 -- # wait 65397 00:16:56.435 [2024-07-14 21:16:07.756221] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:16:56.435 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- bdev/bdev_raid.sh@343 -- # return 0 00:16:56.435 00:16:56.435 real 0m8.368s 00:16:56.436 user 0m14.324s 00:16:56.436 sys 0m1.657s 00:16:56.436 ************************************ 00:16:56.436 END TEST raid_state_function_test_sb_4k 00:16:56.436 ************************************ 00:16:56.436 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:56.436 21:16:07 bdev_raid.raid_state_function_test_sb_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.436 21:16:07 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:16:56.436 21:16:07 bdev_raid -- bdev/bdev_raid.sh@899 -- # run_test raid_superblock_test_4k raid_superblock_test raid1 2 00:16:56.436 21:16:07 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:16:56.436 21:16:07 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:56.436 21:16:07 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:16:56.436 ************************************ 00:16:56.436 START TEST raid_superblock_test_4k 00:16:56.436 ************************************ 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@398 -- # local strip_size 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@411 -- # raid_pid=65667 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@412 -- # waitforlisten 65667 
/var/tmp/spdk-raid.sock 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@829 -- # '[' -z 65667 ']' 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:56.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:56.436 21:16:07 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:56.436 [2024-07-14 21:16:07.981285] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:16:56.436 [2024-07-14 21:16:07.981571] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:16:57.003 EAL: TSC is not safe to use in SMP mode 00:16:57.003 EAL: TSC is not invariant 00:16:57.003 [2024-07-14 21:16:08.517973] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:57.261 [2024-07-14 21:16:08.600876] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:16:57.261 [2024-07-14 21:16:08.603037] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:16:57.261 [2024-07-14 21:16:08.603938] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.261 [2024-07-14 21:16:08.603953] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:16:57.520 21:16:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:57.520 21:16:08 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@862 -- # return 0 00:16:57.520 21:16:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:16:57.520 21:16:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:57.520 21:16:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:16:57.520 21:16:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:16:57.520 21:16:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:16:57.520 21:16:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:57.520 21:16:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:57.520 21:16:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:57.520 21:16:08 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc1 00:16:57.778 malloc1 00:16:57.778 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:16:58.036 [2024-07-14 21:16:09.379769] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:16:58.036 [2024-07-14 21:16:09.379843] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.036 [2024-07-14 21:16:09.379853] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2309df034780 00:16:58.036 [2024-07-14 21:16:09.379860] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.036 [2024-07-14 21:16:09.380972] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.036 [2024-07-14 21:16:09.381012] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:16:58.036 pt1 00:16:58.036 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:58.036 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:58.036 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:16:58.036 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:16:58.036 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:16:58.036 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:16:58.036 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:16:58.036 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:16:58.036 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -b malloc2 00:16:58.297 malloc2 00:16:58.297 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:16:58.556 [2024-07-14 21:16:09.859794] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:16:58.556 [2024-07-14 21:16:09.859872] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:16:58.556 [2024-07-14 21:16:09.859886] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2309df034c80 00:16:58.556 [2024-07-14 21:16:09.859897] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:16:58.556 [2024-07-14 21:16:09.860762] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:16:58.556 [2024-07-14 21:16:09.860800] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:16:58.556 pt2 00:16:58.556 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:16:58.556 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:16:58.556 21:16:09 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:16:58.815 [2024-07-14 21:16:10.127794] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:16:58.815 [2024-07-14 21:16:10.128284] 
bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:16:58.815 [2024-07-14 21:16:10.128341] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2309df034f00 00:16:58.815 [2024-07-14 21:16:10.128348] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:16:58.815 [2024-07-14 21:16:10.128419] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2309df097e20 00:16:58.815 [2024-07-14 21:16:10.128484] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2309df034f00 00:16:58.815 [2024-07-14 21:16:10.128489] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2309df034f00 00:16:58.815 [2024-07-14 21:16:10.128512] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:16:58.815 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:16:58.815 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:16:58.815 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:16:58.815 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:16:58.815 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:16:58.815 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:16:58.815 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:16:58.815 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:16:58.815 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:16:58.815 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:16:58.815 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:16:58.815 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:16:59.074 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:16:59.074 "name": "raid_bdev1", 00:16:59.074 "uuid": "4ab408aa-4226-11ef-aa83-81fbc7dfef58", 00:16:59.074 "strip_size_kb": 0, 00:16:59.074 "state": "online", 00:16:59.074 "raid_level": "raid1", 00:16:59.074 "superblock": true, 00:16:59.074 "num_base_bdevs": 2, 00:16:59.074 "num_base_bdevs_discovered": 2, 00:16:59.074 "num_base_bdevs_operational": 2, 00:16:59.074 "base_bdevs_list": [ 00:16:59.074 { 00:16:59.074 "name": "pt1", 00:16:59.074 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.074 "is_configured": true, 00:16:59.074 "data_offset": 256, 00:16:59.074 "data_size": 7936 00:16:59.074 }, 00:16:59.074 { 00:16:59.074 "name": "pt2", 00:16:59.074 "uuid": "00000000-0000-0000-0000-000000000002", 00:16:59.074 "is_configured": true, 00:16:59.074 "data_offset": 256, 00:16:59.074 "data_size": 7936 00:16:59.074 } 00:16:59.074 ] 00:16:59.074 }' 00:16:59.074 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:16:59.074 21:16:10 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:16:59.332 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@431 -- # 
verify_raid_bdev_properties raid_bdev1 00:16:59.332 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:16:59.332 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:16:59.332 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:16:59.332 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:16:59.332 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:16:59.332 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:16:59.332 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:16:59.590 [2024-07-14 21:16:10.967831] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:16:59.590 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:16:59.590 "name": "raid_bdev1", 00:16:59.590 "aliases": [ 00:16:59.591 "4ab408aa-4226-11ef-aa83-81fbc7dfef58" 00:16:59.591 ], 00:16:59.591 "product_name": "Raid Volume", 00:16:59.591 "block_size": 4096, 00:16:59.591 "num_blocks": 7936, 00:16:59.591 "uuid": "4ab408aa-4226-11ef-aa83-81fbc7dfef58", 00:16:59.591 "assigned_rate_limits": { 00:16:59.591 "rw_ios_per_sec": 0, 00:16:59.591 "rw_mbytes_per_sec": 0, 00:16:59.591 "r_mbytes_per_sec": 0, 00:16:59.591 "w_mbytes_per_sec": 0 00:16:59.591 }, 00:16:59.591 "claimed": false, 00:16:59.591 "zoned": false, 00:16:59.591 "supported_io_types": { 00:16:59.591 "read": true, 00:16:59.591 "write": true, 00:16:59.591 "unmap": false, 00:16:59.591 "flush": false, 00:16:59.591 "reset": true, 00:16:59.591 "nvme_admin": false, 00:16:59.591 "nvme_io": false, 00:16:59.591 "nvme_io_md": false, 00:16:59.591 "write_zeroes": true, 00:16:59.591 "zcopy": false, 00:16:59.591 "get_zone_info": false, 00:16:59.591 "zone_management": false, 00:16:59.591 "zone_append": false, 00:16:59.591 "compare": false, 00:16:59.591 "compare_and_write": false, 00:16:59.591 "abort": false, 00:16:59.591 "seek_hole": false, 00:16:59.591 "seek_data": false, 00:16:59.591 "copy": false, 00:16:59.591 "nvme_iov_md": false 00:16:59.591 }, 00:16:59.591 "memory_domains": [ 00:16:59.591 { 00:16:59.591 "dma_device_id": "system", 00:16:59.591 "dma_device_type": 1 00:16:59.591 }, 00:16:59.591 { 00:16:59.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.591 "dma_device_type": 2 00:16:59.591 }, 00:16:59.591 { 00:16:59.591 "dma_device_id": "system", 00:16:59.591 "dma_device_type": 1 00:16:59.591 }, 00:16:59.591 { 00:16:59.591 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.591 "dma_device_type": 2 00:16:59.591 } 00:16:59.591 ], 00:16:59.591 "driver_specific": { 00:16:59.591 "raid": { 00:16:59.591 "uuid": "4ab408aa-4226-11ef-aa83-81fbc7dfef58", 00:16:59.591 "strip_size_kb": 0, 00:16:59.591 "state": "online", 00:16:59.591 "raid_level": "raid1", 00:16:59.591 "superblock": true, 00:16:59.591 "num_base_bdevs": 2, 00:16:59.591 "num_base_bdevs_discovered": 2, 00:16:59.591 "num_base_bdevs_operational": 2, 00:16:59.591 "base_bdevs_list": [ 00:16:59.591 { 00:16:59.591 "name": "pt1", 00:16:59.591 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.591 "is_configured": true, 00:16:59.591 "data_offset": 256, 00:16:59.591 "data_size": 7936 00:16:59.591 }, 00:16:59.591 { 00:16:59.591 "name": "pt2", 00:16:59.591 "uuid": 
"00000000-0000-0000-0000-000000000002", 00:16:59.591 "is_configured": true, 00:16:59.591 "data_offset": 256, 00:16:59.591 "data_size": 7936 00:16:59.591 } 00:16:59.591 ] 00:16:59.591 } 00:16:59.591 } 00:16:59.591 }' 00:16:59.591 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:16:59.591 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:16:59.591 pt2' 00:16:59.591 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.591 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:16:59.591 21:16:10 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:16:59.849 "name": "pt1", 00:16:59.849 "aliases": [ 00:16:59.849 "00000000-0000-0000-0000-000000000001" 00:16:59.849 ], 00:16:59.849 "product_name": "passthru", 00:16:59.849 "block_size": 4096, 00:16:59.849 "num_blocks": 8192, 00:16:59.849 "uuid": "00000000-0000-0000-0000-000000000001", 00:16:59.849 "assigned_rate_limits": { 00:16:59.849 "rw_ios_per_sec": 0, 00:16:59.849 "rw_mbytes_per_sec": 0, 00:16:59.849 "r_mbytes_per_sec": 0, 00:16:59.849 "w_mbytes_per_sec": 0 00:16:59.849 }, 00:16:59.849 "claimed": true, 00:16:59.849 "claim_type": "exclusive_write", 00:16:59.849 "zoned": false, 00:16:59.849 "supported_io_types": { 00:16:59.849 "read": true, 00:16:59.849 "write": true, 00:16:59.849 "unmap": true, 00:16:59.849 "flush": true, 00:16:59.849 "reset": true, 00:16:59.849 "nvme_admin": false, 00:16:59.849 "nvme_io": false, 00:16:59.849 "nvme_io_md": false, 00:16:59.849 "write_zeroes": true, 00:16:59.849 "zcopy": true, 00:16:59.849 "get_zone_info": false, 00:16:59.849 "zone_management": false, 00:16:59.849 "zone_append": false, 00:16:59.849 "compare": false, 00:16:59.849 "compare_and_write": false, 00:16:59.849 "abort": true, 00:16:59.849 "seek_hole": false, 00:16:59.849 "seek_data": false, 00:16:59.849 "copy": true, 00:16:59.849 "nvme_iov_md": false 00:16:59.849 }, 00:16:59.849 "memory_domains": [ 00:16:59.849 { 00:16:59.849 "dma_device_id": "system", 00:16:59.849 "dma_device_type": 1 00:16:59.849 }, 00:16:59.849 { 00:16:59.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:16:59.849 "dma_device_type": 2 00:16:59.849 } 00:16:59.849 ], 00:16:59.849 "driver_specific": { 00:16:59.849 "passthru": { 00:16:59.849 "name": "pt1", 00:16:59.849 "base_bdev_name": "malloc1" 00:16:59.849 } 00:16:59.849 } 00:16:59.849 }' 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 
00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:16:59.849 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:00.108 "name": "pt2", 00:17:00.108 "aliases": [ 00:17:00.108 "00000000-0000-0000-0000-000000000002" 00:17:00.108 ], 00:17:00.108 "product_name": "passthru", 00:17:00.108 "block_size": 4096, 00:17:00.108 "num_blocks": 8192, 00:17:00.108 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:00.108 "assigned_rate_limits": { 00:17:00.108 "rw_ios_per_sec": 0, 00:17:00.108 "rw_mbytes_per_sec": 0, 00:17:00.108 "r_mbytes_per_sec": 0, 00:17:00.108 "w_mbytes_per_sec": 0 00:17:00.108 }, 00:17:00.108 "claimed": true, 00:17:00.108 "claim_type": "exclusive_write", 00:17:00.108 "zoned": false, 00:17:00.108 "supported_io_types": { 00:17:00.108 "read": true, 00:17:00.108 "write": true, 00:17:00.108 "unmap": true, 00:17:00.108 "flush": true, 00:17:00.108 "reset": true, 00:17:00.108 "nvme_admin": false, 00:17:00.108 "nvme_io": false, 00:17:00.108 "nvme_io_md": false, 00:17:00.108 "write_zeroes": true, 00:17:00.108 "zcopy": true, 00:17:00.108 "get_zone_info": false, 00:17:00.108 "zone_management": false, 00:17:00.108 "zone_append": false, 00:17:00.108 "compare": false, 00:17:00.108 "compare_and_write": false, 00:17:00.108 "abort": true, 00:17:00.108 "seek_hole": false, 00:17:00.108 "seek_data": false, 00:17:00.108 "copy": true, 00:17:00.108 "nvme_iov_md": false 00:17:00.108 }, 00:17:00.108 "memory_domains": [ 00:17:00.108 { 00:17:00.108 "dma_device_id": "system", 00:17:00.108 "dma_device_type": 1 00:17:00.108 }, 00:17:00.108 { 00:17:00.108 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:00.108 "dma_device_type": 2 00:17:00.108 } 00:17:00.108 ], 00:17:00.108 "driver_specific": { 00:17:00.108 "passthru": { 00:17:00.108 "name": "pt2", 00:17:00.108 "base_bdev_name": "malloc2" 00:17:00.108 } 00:17:00.108 } 00:17:00.108 }' 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 
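(Spelled out, the per-member probes interleaved above reduce to a four-field loop. A condensed sketch under the same assumptions, using bash here-strings and jq without -r so absent fields print as null; with set -e, any mismatch aborts:

  set -e   # a failed [[ ]] check below aborts the sketch
  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  for name in pt1 pt2; do
      info=$($rpc bdev_get_bdevs -b "$name" | jq '.[]')
      [[ $(jq .block_size    <<<"$info") == 4096 ]]  # the _4k variant expects 4096-byte blocks
      [[ $(jq .md_size       <<<"$info") == null ]]  # no separate metadata area
      [[ $(jq .md_interleave <<<"$info") == null ]]  # nor interleaved metadata
      [[ $(jq .dif_type      <<<"$info") == null ]]  # nor protection information
  done
)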
00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:00.108 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:00.366 [2024-07-14 21:16:11.871825] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:00.366 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=4ab408aa-4226-11ef-aa83-81fbc7dfef58 00:17:00.366 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@435 -- # '[' -z 4ab408aa-4226-11ef-aa83-81fbc7dfef58 ']' 00:17:00.366 21:16:11 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:00.624 [2024-07-14 21:16:12.159796] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:00.624 [2024-07-14 21:16:12.159814] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:00.624 [2024-07-14 21:16:12.159857] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:00.624 [2024-07-14 21:16:12.159872] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:00.624 [2024-07-14 21:16:12.159875] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2309df034f00 name raid_bdev1, state offline 00:17:00.882 21:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:00.882 21:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:01.140 21:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:01.140 21:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:01.140 21:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:01.140 21:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:01.398 21:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:01.398 21:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:01.655 21:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:01.655 21:16:12 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:01.911 21:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:01.911 21:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' 
-n raid_bdev1 00:17:01.911 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@648 -- # local es=0 00:17:01.911 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:01.911 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.911 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.911 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.911 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.911 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.911 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:01.911 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:01.911 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:01.911 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:02.168 [2024-07-14 21:16:13.547832] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:02.168 [2024-07-14 21:16:13.548507] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:02.168 [2024-07-14 21:16:13.548534] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:02.168 [2024-07-14 21:16:13.548584] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:02.168 [2024-07-14 21:16:13.548596] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:02.168 [2024-07-14 21:16:13.548599] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2309df034c80 name raid_bdev1, state configuring 00:17:02.168 request: 00:17:02.168 { 00:17:02.168 "name": "raid_bdev1", 00:17:02.168 "raid_level": "raid1", 00:17:02.168 "base_bdevs": [ 00:17:02.168 "malloc1", 00:17:02.168 "malloc2" 00:17:02.168 ], 00:17:02.168 "superblock": false, 00:17:02.168 "method": "bdev_raid_create", 00:17:02.168 "req_id": 1 00:17:02.168 } 00:17:02.168 Got JSON-RPC error response 00:17:02.168 response: 00:17:02.168 { 00:17:02.168 "code": -17, 00:17:02.168 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:02.168 } 00:17:02.168 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@651 -- # es=1 00:17:02.168 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:02.168 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:02.168 21:16:13 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:02.168 21:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 
-- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.168 21:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:02.426 21:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:02.426 21:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:02.426 21:16:13 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:02.684 [2024-07-14 21:16:14.091834] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:02.684 [2024-07-14 21:16:14.091873] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:02.684 [2024-07-14 21:16:14.091883] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2309df034780 00:17:02.684 [2024-07-14 21:16:14.091890] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:02.684 [2024-07-14 21:16:14.092325] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:02.684 [2024-07-14 21:16:14.092349] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:02.684 [2024-07-14 21:16:14.092372] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:02.684 [2024-07-14 21:16:14.092382] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:02.684 pt1 00:17:02.684 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:02.684 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:02.684 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:02.684 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:02.684 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:02.684 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:02.684 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:02.684 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:02.684 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:02.684 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:02.684 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:02.684 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:02.942 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:02.942 "name": "raid_bdev1", 00:17:02.942 "uuid": "4ab408aa-4226-11ef-aa83-81fbc7dfef58", 00:17:02.942 "strip_size_kb": 0, 00:17:02.942 "state": "configuring", 00:17:02.942 "raid_level": "raid1", 00:17:02.942 "superblock": true, 00:17:02.942 "num_base_bdevs": 2, 00:17:02.942 "num_base_bdevs_discovered": 1, 00:17:02.942 "num_base_bdevs_operational": 2, 00:17:02.942 
"base_bdevs_list": [ 00:17:02.942 { 00:17:02.942 "name": "pt1", 00:17:02.942 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:02.942 "is_configured": true, 00:17:02.942 "data_offset": 256, 00:17:02.942 "data_size": 7936 00:17:02.942 }, 00:17:02.942 { 00:17:02.942 "name": null, 00:17:02.942 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:02.942 "is_configured": false, 00:17:02.942 "data_offset": 256, 00:17:02.942 "data_size": 7936 00:17:02.942 } 00:17:02.942 ] 00:17:02.942 }' 00:17:02.942 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:02.942 21:16:14 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:03.200 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:03.200 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:03.200 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:03.200 21:16:14 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:03.458 [2024-07-14 21:16:14.991837] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:03.458 [2024-07-14 21:16:14.991868] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:03.458 [2024-07-14 21:16:14.991877] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2309df034f00 00:17:03.458 [2024-07-14 21:16:14.991884] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:03.458 [2024-07-14 21:16:14.991949] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:03.458 [2024-07-14 21:16:14.991961] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:03.458 [2024-07-14 21:16:14.991977] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:03.458 [2024-07-14 21:16:14.991984] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:03.458 [2024-07-14 21:16:14.992005] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2309df035180 00:17:03.458 [2024-07-14 21:16:14.992010] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:03.458 [2024-07-14 21:16:14.992026] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2309df097e20 00:17:03.458 [2024-07-14 21:16:14.992102] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2309df035180 00:17:03.458 [2024-07-14 21:16:14.992107] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2309df035180 00:17:03.458 [2024-07-14 21:16:14.992127] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:03.458 pt2 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local 
expected_state=online 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:03.716 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:03.974 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:03.974 "name": "raid_bdev1", 00:17:03.974 "uuid": "4ab408aa-4226-11ef-aa83-81fbc7dfef58", 00:17:03.974 "strip_size_kb": 0, 00:17:03.974 "state": "online", 00:17:03.974 "raid_level": "raid1", 00:17:03.974 "superblock": true, 00:17:03.974 "num_base_bdevs": 2, 00:17:03.974 "num_base_bdevs_discovered": 2, 00:17:03.974 "num_base_bdevs_operational": 2, 00:17:03.974 "base_bdevs_list": [ 00:17:03.974 { 00:17:03.974 "name": "pt1", 00:17:03.974 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:03.974 "is_configured": true, 00:17:03.974 "data_offset": 256, 00:17:03.974 "data_size": 7936 00:17:03.974 }, 00:17:03.974 { 00:17:03.974 "name": "pt2", 00:17:03.974 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:03.974 "is_configured": true, 00:17:03.974 "data_offset": 256, 00:17:03.974 "data_size": 7936 00:17:03.974 } 00:17:03.974 ] 00:17:03.974 }' 00:17:03.974 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:03.974 21:16:15 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:04.231 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:04.231 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:04.231 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:04.231 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:04.231 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:04.231 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@198 -- # local name 00:17:04.231 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:04.231 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:04.488 [2024-07-14 21:16:15.887874] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:04.488 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:04.488 "name": "raid_bdev1", 00:17:04.488 "aliases": [ 00:17:04.488 "4ab408aa-4226-11ef-aa83-81fbc7dfef58" 00:17:04.488 
], 00:17:04.488 "product_name": "Raid Volume", 00:17:04.488 "block_size": 4096, 00:17:04.488 "num_blocks": 7936, 00:17:04.488 "uuid": "4ab408aa-4226-11ef-aa83-81fbc7dfef58", 00:17:04.488 "assigned_rate_limits": { 00:17:04.488 "rw_ios_per_sec": 0, 00:17:04.488 "rw_mbytes_per_sec": 0, 00:17:04.488 "r_mbytes_per_sec": 0, 00:17:04.488 "w_mbytes_per_sec": 0 00:17:04.488 }, 00:17:04.488 "claimed": false, 00:17:04.488 "zoned": false, 00:17:04.488 "supported_io_types": { 00:17:04.488 "read": true, 00:17:04.488 "write": true, 00:17:04.488 "unmap": false, 00:17:04.488 "flush": false, 00:17:04.488 "reset": true, 00:17:04.488 "nvme_admin": false, 00:17:04.488 "nvme_io": false, 00:17:04.488 "nvme_io_md": false, 00:17:04.488 "write_zeroes": true, 00:17:04.488 "zcopy": false, 00:17:04.488 "get_zone_info": false, 00:17:04.488 "zone_management": false, 00:17:04.488 "zone_append": false, 00:17:04.488 "compare": false, 00:17:04.488 "compare_and_write": false, 00:17:04.488 "abort": false, 00:17:04.488 "seek_hole": false, 00:17:04.488 "seek_data": false, 00:17:04.488 "copy": false, 00:17:04.488 "nvme_iov_md": false 00:17:04.488 }, 00:17:04.488 "memory_domains": [ 00:17:04.488 { 00:17:04.488 "dma_device_id": "system", 00:17:04.488 "dma_device_type": 1 00:17:04.488 }, 00:17:04.488 { 00:17:04.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.488 "dma_device_type": 2 00:17:04.488 }, 00:17:04.488 { 00:17:04.488 "dma_device_id": "system", 00:17:04.488 "dma_device_type": 1 00:17:04.488 }, 00:17:04.488 { 00:17:04.488 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.488 "dma_device_type": 2 00:17:04.488 } 00:17:04.488 ], 00:17:04.488 "driver_specific": { 00:17:04.488 "raid": { 00:17:04.488 "uuid": "4ab408aa-4226-11ef-aa83-81fbc7dfef58", 00:17:04.488 "strip_size_kb": 0, 00:17:04.488 "state": "online", 00:17:04.488 "raid_level": "raid1", 00:17:04.488 "superblock": true, 00:17:04.488 "num_base_bdevs": 2, 00:17:04.488 "num_base_bdevs_discovered": 2, 00:17:04.488 "num_base_bdevs_operational": 2, 00:17:04.488 "base_bdevs_list": [ 00:17:04.488 { 00:17:04.488 "name": "pt1", 00:17:04.488 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.488 "is_configured": true, 00:17:04.488 "data_offset": 256, 00:17:04.488 "data_size": 7936 00:17:04.488 }, 00:17:04.488 { 00:17:04.488 "name": "pt2", 00:17:04.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:04.488 "is_configured": true, 00:17:04.488 "data_offset": 256, 00:17:04.488 "data_size": 7936 00:17:04.488 } 00:17:04.488 ] 00:17:04.488 } 00:17:04.488 } 00:17:04.488 }' 00:17:04.488 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:04.488 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:04.488 pt2' 00:17:04.488 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:04.488 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:04.488 21:16:15 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:04.746 "name": "pt1", 00:17:04.746 "aliases": [ 00:17:04.746 "00000000-0000-0000-0000-000000000001" 00:17:04.746 ], 00:17:04.746 "product_name": "passthru", 00:17:04.746 "block_size": 4096, 00:17:04.746 "num_blocks": 
8192, 00:17:04.746 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:04.746 "assigned_rate_limits": { 00:17:04.746 "rw_ios_per_sec": 0, 00:17:04.746 "rw_mbytes_per_sec": 0, 00:17:04.746 "r_mbytes_per_sec": 0, 00:17:04.746 "w_mbytes_per_sec": 0 00:17:04.746 }, 00:17:04.746 "claimed": true, 00:17:04.746 "claim_type": "exclusive_write", 00:17:04.746 "zoned": false, 00:17:04.746 "supported_io_types": { 00:17:04.746 "read": true, 00:17:04.746 "write": true, 00:17:04.746 "unmap": true, 00:17:04.746 "flush": true, 00:17:04.746 "reset": true, 00:17:04.746 "nvme_admin": false, 00:17:04.746 "nvme_io": false, 00:17:04.746 "nvme_io_md": false, 00:17:04.746 "write_zeroes": true, 00:17:04.746 "zcopy": true, 00:17:04.746 "get_zone_info": false, 00:17:04.746 "zone_management": false, 00:17:04.746 "zone_append": false, 00:17:04.746 "compare": false, 00:17:04.746 "compare_and_write": false, 00:17:04.746 "abort": true, 00:17:04.746 "seek_hole": false, 00:17:04.746 "seek_data": false, 00:17:04.746 "copy": true, 00:17:04.746 "nvme_iov_md": false 00:17:04.746 }, 00:17:04.746 "memory_domains": [ 00:17:04.746 { 00:17:04.746 "dma_device_id": "system", 00:17:04.746 "dma_device_type": 1 00:17:04.746 }, 00:17:04.746 { 00:17:04.746 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:04.746 "dma_device_type": 2 00:17:04.746 } 00:17:04.746 ], 00:17:04.746 "driver_specific": { 00:17:04.746 "passthru": { 00:17:04.746 "name": "pt1", 00:17:04.746 "base_bdev_name": "malloc1" 00:17:04.746 } 00:17:04.746 } 00:17:04.746 }' 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:04.746 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:05.004 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:05.004 "name": "pt2", 00:17:05.004 "aliases": [ 00:17:05.004 "00000000-0000-0000-0000-000000000002" 00:17:05.004 ], 00:17:05.004 "product_name": "passthru", 00:17:05.004 "block_size": 4096, 00:17:05.004 "num_blocks": 8192, 00:17:05.004 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:05.004 "assigned_rate_limits": 
{ 00:17:05.004 "rw_ios_per_sec": 0, 00:17:05.004 "rw_mbytes_per_sec": 0, 00:17:05.004 "r_mbytes_per_sec": 0, 00:17:05.004 "w_mbytes_per_sec": 0 00:17:05.004 }, 00:17:05.004 "claimed": true, 00:17:05.004 "claim_type": "exclusive_write", 00:17:05.004 "zoned": false, 00:17:05.004 "supported_io_types": { 00:17:05.004 "read": true, 00:17:05.004 "write": true, 00:17:05.004 "unmap": true, 00:17:05.004 "flush": true, 00:17:05.004 "reset": true, 00:17:05.004 "nvme_admin": false, 00:17:05.004 "nvme_io": false, 00:17:05.004 "nvme_io_md": false, 00:17:05.004 "write_zeroes": true, 00:17:05.004 "zcopy": true, 00:17:05.004 "get_zone_info": false, 00:17:05.004 "zone_management": false, 00:17:05.004 "zone_append": false, 00:17:05.004 "compare": false, 00:17:05.004 "compare_and_write": false, 00:17:05.004 "abort": true, 00:17:05.004 "seek_hole": false, 00:17:05.004 "seek_data": false, 00:17:05.004 "copy": true, 00:17:05.004 "nvme_iov_md": false 00:17:05.004 }, 00:17:05.004 "memory_domains": [ 00:17:05.004 { 00:17:05.004 "dma_device_id": "system", 00:17:05.004 "dma_device_type": 1 00:17:05.004 }, 00:17:05.004 { 00:17:05.004 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:05.004 "dma_device_type": 2 00:17:05.004 } 00:17:05.004 ], 00:17:05.004 "driver_specific": { 00:17:05.004 "passthru": { 00:17:05.004 "name": "pt2", 00:17:05.004 "base_bdev_name": "malloc2" 00:17:05.004 } 00:17:05.004 } 00:17:05.004 }' 00:17:05.004 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.004 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:05.004 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:05.004 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.004 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:05.262 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@206 -- # [[ null == null ]] 00:17:05.262 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.262 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:05.262 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@207 -- # [[ null == null ]] 00:17:05.262 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.262 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:05.262 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@208 -- # [[ null == null ]] 00:17:05.262 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:05.262 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:05.520 [2024-07-14 21:16:16.859930] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:05.520 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@486 -- # '[' 4ab408aa-4226-11ef-aa83-81fbc7dfef58 '!=' 4ab408aa-4226-11ef-aa83-81fbc7dfef58 ']' 00:17:05.520 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:05.520 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:05.520 21:16:16 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@214 -- # return 0 00:17:05.520 21:16:16 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:05.777 [2024-07-14 21:16:17.127914] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:05.778 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:05.778 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:05.778 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:05.778 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:05.778 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:05.778 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:05.778 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:05.778 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:05.778 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:05.778 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:05.778 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:05.778 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:06.035 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:06.035 "name": "raid_bdev1", 00:17:06.035 "uuid": "4ab408aa-4226-11ef-aa83-81fbc7dfef58", 00:17:06.035 "strip_size_kb": 0, 00:17:06.035 "state": "online", 00:17:06.035 "raid_level": "raid1", 00:17:06.035 "superblock": true, 00:17:06.035 "num_base_bdevs": 2, 00:17:06.035 "num_base_bdevs_discovered": 1, 00:17:06.035 "num_base_bdevs_operational": 1, 00:17:06.035 "base_bdevs_list": [ 00:17:06.035 { 00:17:06.035 "name": null, 00:17:06.035 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:06.035 "is_configured": false, 00:17:06.035 "data_offset": 256, 00:17:06.035 "data_size": 7936 00:17:06.035 }, 00:17:06.035 { 00:17:06.035 "name": "pt2", 00:17:06.035 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:06.035 "is_configured": true, 00:17:06.035 "data_offset": 256, 00:17:06.035 "data_size": 7936 00:17:06.035 } 00:17:06.035 ] 00:17:06.035 }' 00:17:06.035 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:06.035 21:16:17 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:06.292 21:16:17 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:06.550 [2024-07-14 21:16:18.023932] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:06.550 [2024-07-14 21:16:18.023946] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:06.550 [2024-07-14 21:16:18.023957] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:06.550 [2024-07-14 21:16:18.023964] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to 
free all in destruct 00:17:06.550 [2024-07-14 21:16:18.023968] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2309df035180 name raid_bdev1, state offline 00:17:06.550 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:06.550 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:06.807 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:06.807 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:06.807 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:06.807 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:06.807 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:07.064 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:07.064 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:07.064 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:07.064 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:07.064 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@518 -- # i=1 00:17:07.064 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:07.321 [2024-07-14 21:16:18.855942] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:07.321 [2024-07-14 21:16:18.855981] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:07.321 [2024-07-14 21:16:18.855991] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2309df034f00 00:17:07.321 [2024-07-14 21:16:18.855997] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:07.321 [2024-07-14 21:16:18.856465] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:07.321 [2024-07-14 21:16:18.856488] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:07.321 [2024-07-14 21:16:18.856505] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:07.321 [2024-07-14 21:16:18.856521] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:07.321 [2024-07-14 21:16:18.856539] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2309df035180 00:17:07.321 [2024-07-14 21:16:18.856543] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:07.321 [2024-07-14 21:16:18.856559] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2309df097e20 00:17:07.321 [2024-07-14 21:16:18.856603] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2309df035180 00:17:07.321 [2024-07-14 21:16:18.856608] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2309df035180 00:17:07.321 [2024-07-14 21:16:18.856626] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:07.321 pt2 
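(The teardown-and-reassembly just logged is driven entirely by bdev examine: no bdev_raid_create is reissued; re-registering the passthru bdev is enough for the raid module to read its on-disk superblock and bring raid_bdev1 back with a single member. A by-hand sketch of that round trip, reusing the UUID fixed at the start of the test; $rpc is our shorthand:

  rpc='/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock'
  $rpc bdev_raid_delete raid_bdev1
  $rpc bdev_passthru_delete pt2
  # examine runs on registration, finds the superblock, reassembles the array
  $rpc bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002
  $rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1").state'
  # expected output: online (raid1 keeps serving with one surviving mirror)
)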
00:17:07.578 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:07.578 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:07.578 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:07.578 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:07.578 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:07.578 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:07.578 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:07.578 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:07.578 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:07.578 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:07.578 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:07.578 21:16:18 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:07.837 21:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:07.837 "name": "raid_bdev1", 00:17:07.837 "uuid": "4ab408aa-4226-11ef-aa83-81fbc7dfef58", 00:17:07.837 "strip_size_kb": 0, 00:17:07.837 "state": "online", 00:17:07.837 "raid_level": "raid1", 00:17:07.837 "superblock": true, 00:17:07.837 "num_base_bdevs": 2, 00:17:07.837 "num_base_bdevs_discovered": 1, 00:17:07.837 "num_base_bdevs_operational": 1, 00:17:07.837 "base_bdevs_list": [ 00:17:07.837 { 00:17:07.837 "name": null, 00:17:07.837 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:07.837 "is_configured": false, 00:17:07.837 "data_offset": 256, 00:17:07.837 "data_size": 7936 00:17:07.837 }, 00:17:07.837 { 00:17:07.837 "name": "pt2", 00:17:07.837 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:07.837 "is_configured": true, 00:17:07.837 "data_offset": 256, 00:17:07.837 "data_size": 7936 00:17:07.837 } 00:17:07.837 ] 00:17:07.837 }' 00:17:07.837 21:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:07.837 21:16:19 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:08.096 21:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:08.355 [2024-07-14 21:16:19.771949] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.355 [2024-07-14 21:16:19.771964] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:08.355 [2024-07-14 21:16:19.771975] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:08.355 [2024-07-14 21:16:19.771982] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:08.355 [2024-07-14 21:16:19.771986] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2309df035180 name raid_bdev1, state offline 00:17:08.355 21:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # jq 
-r '.[]' 00:17:08.355 21:16:19 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.613 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:08.613 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:08.613 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:17:08.613 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:08.871 [2024-07-14 21:16:20.351971] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:08.871 [2024-07-14 21:16:20.352030] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:08.871 [2024-07-14 21:16:20.352042] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2309df034c80 00:17:08.871 [2024-07-14 21:16:20.352069] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:08.871 [2024-07-14 21:16:20.352791] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:08.871 [2024-07-14 21:16:20.352815] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:08.871 [2024-07-14 21:16:20.352837] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:08.871 [2024-07-14 21:16:20.352848] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:08.871 [2024-07-14 21:16:20.352878] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:08.871 [2024-07-14 21:16:20.352882] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:08.871 [2024-07-14 21:16:20.352886] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2309df034780 name raid_bdev1, state configuring 00:17:08.871 [2024-07-14 21:16:20.352893] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:08.871 [2024-07-14 21:16:20.352908] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x2309df034780 00:17:08.871 [2024-07-14 21:16:20.352912] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:08.871 [2024-07-14 21:16:20.352929] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x2309df097e20 00:17:08.871 [2024-07-14 21:16:20.352975] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x2309df034780 00:17:08.871 [2024-07-14 21:16:20.352981] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x2309df034780 00:17:08.871 [2024-07-14 21:16:20.353000] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:08.871 pt1 00:17:08.871 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:08.871 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:08.872 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:08.872 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:08.872 21:16:20 
bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:08.872 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:08.872 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:08.872 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:08.872 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:08.872 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:08.872 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:08.872 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:08.872 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:09.130 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:09.130 "name": "raid_bdev1", 00:17:09.130 "uuid": "4ab408aa-4226-11ef-aa83-81fbc7dfef58", 00:17:09.130 "strip_size_kb": 0, 00:17:09.130 "state": "online", 00:17:09.130 "raid_level": "raid1", 00:17:09.130 "superblock": true, 00:17:09.130 "num_base_bdevs": 2, 00:17:09.131 "num_base_bdevs_discovered": 1, 00:17:09.131 "num_base_bdevs_operational": 1, 00:17:09.131 "base_bdevs_list": [ 00:17:09.131 { 00:17:09.131 "name": null, 00:17:09.131 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:09.131 "is_configured": false, 00:17:09.131 "data_offset": 256, 00:17:09.131 "data_size": 7936 00:17:09.131 }, 00:17:09.131 { 00:17:09.131 "name": "pt2", 00:17:09.131 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:09.131 "is_configured": true, 00:17:09.131 "data_offset": 256, 00:17:09.131 "data_size": 7936 00:17:09.131 } 00:17:09.131 ] 00:17:09.131 }' 00:17:09.131 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:09.131 21:16:20 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:09.389 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:09.389 21:16:20 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:09.648 21:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:09.648 21:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:09.648 21:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:09.907 [2024-07-14 21:16:21.431990] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:09.907 21:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@557 -- # '[' 4ab408aa-4226-11ef-aa83-81fbc7dfef58 '!=' 4ab408aa-4226-11ef-aa83-81fbc7dfef58 ']' 00:17:09.907 21:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@562 -- # killprocess 65667 00:17:09.907 21:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@948 -- # '[' -z 65667 ']' 00:17:09.907 21:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@952 -- # kill -0 65667 
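The assertion pattern just traced above is the core of these tests: dump all raid bdevs over the Unix-socket RPC, narrow to the bdev under test with jq, and compare fields against the expected state. A minimal sketch of that pattern follows (the real helper lives in test/bdev/bdev_raid.sh; this condensed form and the exact field subset are assumptions, while rpc.py, bdev_raid_get_bdevs, and the socket path are taken from the trace):

    rpc_py="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    verify_raid_bdev_state() {
        local name=$1 expected_state=$2 raid_level=$3 strip_size=$4 operational=$5
        local info
        # One RPC round-trip; jq narrows the returned array to the bdev under test.
        info=$($rpc_py bdev_raid_get_bdevs all | jq -r ".[] | select(.name == \"$name\")")
        [[ $(jq -r .state <<< "$info") == "$expected_state" ]] || return 1
        [[ $(jq -r .raid_level <<< "$info") == "$raid_level" ]] || return 1
        [[ $(jq -r .strip_size_kb <<< "$info") -eq $strip_size ]] || return 1
        [[ $(jq -r .num_base_bdevs_discovered <<< "$info") -eq $operational ]] || return 1
    }

The killprocess trace that continues below is the teardown counterpart: on FreeBSD (no /proc) it resolves the process name with ps -c -o command piped through tail -1 before sending the kill.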
00:17:09.907 21:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # uname 00:17:09.907 21:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:10.165 21:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # tail -1 00:17:10.165 21:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # ps -c -o command 65667 00:17:10.165 21:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:10.165 21:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:10.165 killing process with pid 65667 00:17:10.165 21:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@966 -- # echo 'killing process with pid 65667' 00:17:10.165 21:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@967 -- # kill 65667 00:17:10.165 [2024-07-14 21:16:21.460266] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:10.165 [2024-07-14 21:16:21.460282] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:10.165 [2024-07-14 21:16:21.460289] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:10.165 [2024-07-14 21:16:21.460292] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x2309df034780 name raid_bdev1, state offline 00:17:10.165 21:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@972 -- # wait 65667 00:17:10.165 [2024-07-14 21:16:21.476511] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:10.165 21:16:21 bdev_raid.raid_superblock_test_4k -- bdev/bdev_raid.sh@564 -- # return 0 00:17:10.165 00:17:10.165 real 0m13.723s 00:17:10.165 user 0m24.449s 00:17:10.165 sys 0m2.201s 00:17:10.165 21:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:10.165 21:16:21 bdev_raid.raid_superblock_test_4k -- common/autotest_common.sh@10 -- # set +x 00:17:10.165 ************************************ 00:17:10.165 END TEST raid_superblock_test_4k 00:17:10.165 ************************************ 00:17:10.436 21:16:21 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:10.436 21:16:21 bdev_raid -- bdev/bdev_raid.sh@900 -- # '[' '' = true ']' 00:17:10.436 21:16:21 bdev_raid -- bdev/bdev_raid.sh@904 -- # base_malloc_params='-m 32' 00:17:10.436 21:16:21 bdev_raid -- bdev/bdev_raid.sh@905 -- # run_test raid_state_function_test_sb_md_separate raid_state_function_test raid1 2 true 00:17:10.436 21:16:21 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:10.436 21:16:21 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:10.436 21:16:21 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:10.436 ************************************ 00:17:10.436 START TEST raid_state_function_test_sb_md_separate 00:17:10.436 ************************************ 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@222 -- # local 
superblock=true 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:10.436 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@230 -- # '[' raid1 '!=' raid1 ']' 00:17:10.437 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:10.437 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:10.437 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:10.437 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@244 -- # raid_pid=66058 00:17:10.437 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:10.437 Process raid pid: 66058 00:17:10.437 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66058' 00:17:10.437 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@246 -- # waitforlisten 66058 /var/tmp/spdk-raid.sock 00:17:10.437 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@829 -- # '[' -z 66058 ']' 00:17:10.437 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:10.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 
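waitforlisten blocks until the freshly forked bdev_svc answers on the RPC socket, retrying up to max_retries times. A rough sketch of that loop (an assumed simplification; the real helper in autotest_common.sh also handles abnormal process exit, but rpc_get_methods is a real, cheap SPDK RPC):

    waitforlisten() {
        local pid=$1 rpc_addr=$2 max_retries=100
        while (( max_retries-- > 0 )); do
            kill -0 "$pid" || return 1   # target process died before listening
            # Any successful RPC proves the socket is up.
            if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" \
                    rpc_get_methods &>/dev/null; then
                return 0
            fi
            sleep 0.1
        done
        return 1
    }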
00:17:10.437 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:10.437 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:10.437 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:10.437 21:16:21 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:10.437 [2024-07-14 21:16:21.754179] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:10.437 [2024-07-14 21:16:21.754350] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:11.019 EAL: TSC is not safe to use in SMP mode 00:17:11.019 EAL: TSC is not invariant 00:17:11.019 [2024-07-14 21:16:22.281763] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:11.019 [2024-07-14 21:16:22.377822] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:17:11.019 [2024-07-14 21:16:22.380271] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:11.019 [2024-07-14 21:16:22.381188] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.019 [2024-07-14 21:16:22.381198] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:11.277 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:11.277 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@862 -- # return 0 00:17:11.277 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:11.535 [2024-07-14 21:16:22.978081] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:11.535 [2024-07-14 21:16:22.978146] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:11.535 [2024-07-14 21:16:22.978165] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:11.535 [2024-07-14 21:16:22.978173] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:11.535 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:11.535 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:11.535 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:11.535 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:11.535 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:11.535 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:11.535 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:11.535 21:16:22 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:11.535 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:11.535 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:11.535 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:11.535 21:16:22 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:11.794 21:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:11.794 "name": "Existed_Raid", 00:17:11.794 "uuid": "525cd51b-4226-11ef-aa83-81fbc7dfef58", 00:17:11.794 "strip_size_kb": 0, 00:17:11.794 "state": "configuring", 00:17:11.794 "raid_level": "raid1", 00:17:11.794 "superblock": true, 00:17:11.794 "num_base_bdevs": 2, 00:17:11.794 "num_base_bdevs_discovered": 0, 00:17:11.794 "num_base_bdevs_operational": 2, 00:17:11.794 "base_bdevs_list": [ 00:17:11.794 { 00:17:11.794 "name": "BaseBdev1", 00:17:11.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.794 "is_configured": false, 00:17:11.794 "data_offset": 0, 00:17:11.794 "data_size": 0 00:17:11.794 }, 00:17:11.794 { 00:17:11.794 "name": "BaseBdev2", 00:17:11.794 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:11.794 "is_configured": false, 00:17:11.794 "data_offset": 0, 00:17:11.794 "data_size": 0 00:17:11.794 } 00:17:11.794 ] 00:17:11.794 }' 00:17:11.794 21:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:11.794 21:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:12.051 21:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:12.308 [2024-07-14 21:16:23.710081] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:12.308 [2024-07-14 21:16:23.710103] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x352fc6a34500 name Existed_Raid, state configuring 00:17:12.308 21:16:23 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:12.566 [2024-07-14 21:16:24.006101] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:12.566 [2024-07-14 21:16:24.006162] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:12.566 [2024-07-14 21:16:24.006183] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:12.566 [2024-07-14 21:16:24.006191] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:12.566 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev1 00:17:12.824 [2024-07-14 21:16:24.270961] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:12.824 BaseBdev1 00:17:12.824 21:16:24 
bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:12.824 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:12.824 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:12.824 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:17:12.824 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:12.824 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:12.824 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:13.082 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 2000 00:17:13.340 [ 00:17:13.340 { 00:17:13.340 "name": "BaseBdev1", 00:17:13.340 "aliases": [ 00:17:13.340 "5321fb50-4226-11ef-aa83-81fbc7dfef58" 00:17:13.340 ], 00:17:13.340 "product_name": "Malloc disk", 00:17:13.340 "block_size": 4096, 00:17:13.340 "num_blocks": 8192, 00:17:13.340 "uuid": "5321fb50-4226-11ef-aa83-81fbc7dfef58", 00:17:13.340 "md_size": 32, 00:17:13.340 "md_interleave": false, 00:17:13.340 "dif_type": 0, 00:17:13.340 "assigned_rate_limits": { 00:17:13.340 "rw_ios_per_sec": 0, 00:17:13.340 "rw_mbytes_per_sec": 0, 00:17:13.340 "r_mbytes_per_sec": 0, 00:17:13.340 "w_mbytes_per_sec": 0 00:17:13.340 }, 00:17:13.340 "claimed": true, 00:17:13.340 "claim_type": "exclusive_write", 00:17:13.340 "zoned": false, 00:17:13.340 "supported_io_types": { 00:17:13.340 "read": true, 00:17:13.340 "write": true, 00:17:13.340 "unmap": true, 00:17:13.341 "flush": true, 00:17:13.341 "reset": true, 00:17:13.341 "nvme_admin": false, 00:17:13.341 "nvme_io": false, 00:17:13.341 "nvme_io_md": false, 00:17:13.341 "write_zeroes": true, 00:17:13.341 "zcopy": true, 00:17:13.341 "get_zone_info": false, 00:17:13.341 "zone_management": false, 00:17:13.341 "zone_append": false, 00:17:13.341 "compare": false, 00:17:13.341 "compare_and_write": false, 00:17:13.341 "abort": true, 00:17:13.341 "seek_hole": false, 00:17:13.341 "seek_data": false, 00:17:13.341 "copy": true, 00:17:13.341 "nvme_iov_md": false 00:17:13.341 }, 00:17:13.341 "memory_domains": [ 00:17:13.341 { 00:17:13.341 "dma_device_id": "system", 00:17:13.341 "dma_device_type": 1 00:17:13.341 }, 00:17:13.341 { 00:17:13.341 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:13.341 "dma_device_type": 2 00:17:13.341 } 00:17:13.341 ], 00:17:13.341 "driver_specific": {} 00:17:13.341 } 00:17:13.341 ] 00:17:13.341 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:17:13.341 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:13.341 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:13.341 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:13.341 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:13.341 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:13.341 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:13.341 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:13.341 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:13.341 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:13.341 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:13.341 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:13.341 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:13.598 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:13.598 "name": "Existed_Raid", 00:17:13.598 "uuid": "52f9b21a-4226-11ef-aa83-81fbc7dfef58", 00:17:13.598 "strip_size_kb": 0, 00:17:13.598 "state": "configuring", 00:17:13.598 "raid_level": "raid1", 00:17:13.598 "superblock": true, 00:17:13.598 "num_base_bdevs": 2, 00:17:13.598 "num_base_bdevs_discovered": 1, 00:17:13.598 "num_base_bdevs_operational": 2, 00:17:13.598 "base_bdevs_list": [ 00:17:13.598 { 00:17:13.598 "name": "BaseBdev1", 00:17:13.598 "uuid": "5321fb50-4226-11ef-aa83-81fbc7dfef58", 00:17:13.598 "is_configured": true, 00:17:13.598 "data_offset": 256, 00:17:13.598 "data_size": 7936 00:17:13.598 }, 00:17:13.598 { 00:17:13.598 "name": "BaseBdev2", 00:17:13.598 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:13.598 "is_configured": false, 00:17:13.598 "data_offset": 0, 00:17:13.598 "data_size": 0 00:17:13.598 } 00:17:13.598 ] 00:17:13.598 }' 00:17:13.598 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:13.598 21:16:24 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:13.856 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:14.113 [2024-07-14 21:16:25.466116] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:14.113 [2024-07-14 21:16:25.466140] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x352fc6a34500 name Existed_Raid, state configuring 00:17:14.113 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:14.372 [2024-07-14 21:16:25.730166] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:14.372 [2024-07-14 21:16:25.731025] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:14.372 [2024-07-14 21:16:25.731074] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:14.372 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:14.630 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:14.630 "name": "Existed_Raid", 00:17:14.630 "uuid": "5400c3b1-4226-11ef-aa83-81fbc7dfef58", 00:17:14.630 "strip_size_kb": 0, 00:17:14.630 "state": "configuring", 00:17:14.630 "raid_level": "raid1", 00:17:14.630 "superblock": true, 00:17:14.630 "num_base_bdevs": 2, 00:17:14.630 "num_base_bdevs_discovered": 1, 00:17:14.630 "num_base_bdevs_operational": 2, 00:17:14.630 "base_bdevs_list": [ 00:17:14.630 { 00:17:14.630 "name": "BaseBdev1", 00:17:14.630 "uuid": "5321fb50-4226-11ef-aa83-81fbc7dfef58", 00:17:14.630 "is_configured": true, 00:17:14.630 "data_offset": 256, 00:17:14.630 "data_size": 7936 00:17:14.630 }, 00:17:14.630 { 00:17:14.630 "name": "BaseBdev2", 00:17:14.631 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:14.631 "is_configured": false, 00:17:14.631 "data_offset": 0, 00:17:14.631 "data_size": 0 00:17:14.631 } 00:17:14.631 ] 00:17:14.631 }' 00:17:14.631 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:14.631 21:16:25 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:14.889 21:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b BaseBdev2 00:17:15.146 [2024-07-14 21:16:26.494273] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:15.146 [2024-07-14 21:16:26.494335] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x352fc6a34a00 00:17:15.146 [2024-07-14 
21:16:26.494340] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:15.146 [2024-07-14 21:16:26.494358] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x352fc6a97e20 00:17:15.146 [2024-07-14 21:16:26.494385] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x352fc6a34a00 00:17:15.146 [2024-07-14 21:16:26.494402] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x352fc6a34a00 00:17:15.146 [2024-07-14 21:16:26.494417] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:15.146 BaseBdev2 00:17:15.146 21:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:15.146 21:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:15.146 21:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:15.146 21:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@899 -- # local i 00:17:15.146 21:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:15.146 21:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:15.146 21:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:15.404 21:16:26 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:15.663 [ 00:17:15.663 { 00:17:15.663 "name": "BaseBdev2", 00:17:15.663 "aliases": [ 00:17:15.663 "54755963-4226-11ef-aa83-81fbc7dfef58" 00:17:15.663 ], 00:17:15.663 "product_name": "Malloc disk", 00:17:15.663 "block_size": 4096, 00:17:15.663 "num_blocks": 8192, 00:17:15.663 "uuid": "54755963-4226-11ef-aa83-81fbc7dfef58", 00:17:15.663 "md_size": 32, 00:17:15.663 "md_interleave": false, 00:17:15.663 "dif_type": 0, 00:17:15.663 "assigned_rate_limits": { 00:17:15.663 "rw_ios_per_sec": 0, 00:17:15.663 "rw_mbytes_per_sec": 0, 00:17:15.663 "r_mbytes_per_sec": 0, 00:17:15.663 "w_mbytes_per_sec": 0 00:17:15.663 }, 00:17:15.663 "claimed": true, 00:17:15.663 "claim_type": "exclusive_write", 00:17:15.663 "zoned": false, 00:17:15.663 "supported_io_types": { 00:17:15.663 "read": true, 00:17:15.663 "write": true, 00:17:15.663 "unmap": true, 00:17:15.663 "flush": true, 00:17:15.663 "reset": true, 00:17:15.663 "nvme_admin": false, 00:17:15.663 "nvme_io": false, 00:17:15.663 "nvme_io_md": false, 00:17:15.663 "write_zeroes": true, 00:17:15.663 "zcopy": true, 00:17:15.663 "get_zone_info": false, 00:17:15.663 "zone_management": false, 00:17:15.663 "zone_append": false, 00:17:15.663 "compare": false, 00:17:15.663 "compare_and_write": false, 00:17:15.663 "abort": true, 00:17:15.663 "seek_hole": false, 00:17:15.663 "seek_data": false, 00:17:15.663 "copy": true, 00:17:15.663 "nvme_iov_md": false 00:17:15.663 }, 00:17:15.663 "memory_domains": [ 00:17:15.663 { 00:17:15.663 "dma_device_id": "system", 00:17:15.663 "dma_device_type": 1 00:17:15.663 }, 00:17:15.663 { 00:17:15.663 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:15.663 "dma_device_type": 2 00:17:15.663 } 00:17:15.663 ], 00:17:15.663 "driver_specific": {} 
00:17:15.663 } 00:17:15.663 ] 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@905 -- # return 0 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:15.663 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:15.664 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:15.922 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:15.922 "name": "Existed_Raid", 00:17:15.922 "uuid": "5400c3b1-4226-11ef-aa83-81fbc7dfef58", 00:17:15.922 "strip_size_kb": 0, 00:17:15.922 "state": "online", 00:17:15.922 "raid_level": "raid1", 00:17:15.922 "superblock": true, 00:17:15.922 "num_base_bdevs": 2, 00:17:15.922 "num_base_bdevs_discovered": 2, 00:17:15.922 "num_base_bdevs_operational": 2, 00:17:15.922 "base_bdevs_list": [ 00:17:15.922 { 00:17:15.922 "name": "BaseBdev1", 00:17:15.922 "uuid": "5321fb50-4226-11ef-aa83-81fbc7dfef58", 00:17:15.922 "is_configured": true, 00:17:15.922 "data_offset": 256, 00:17:15.922 "data_size": 7936 00:17:15.922 }, 00:17:15.922 { 00:17:15.922 "name": "BaseBdev2", 00:17:15.922 "uuid": "54755963-4226-11ef-aa83-81fbc7dfef58", 00:17:15.922 "is_configured": true, 00:17:15.922 "data_offset": 256, 00:17:15.922 "data_size": 7936 00:17:15.922 } 00:17:15.922 ] 00:17:15.922 }' 00:17:15.922 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:15.922 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:16.180 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:16.180 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:16.180 
21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:16.180 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:16.180 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:16.180 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:17:16.180 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:16.180 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:16.438 [2024-07-14 21:16:27.754255] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:16.438 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:16.438 "name": "Existed_Raid", 00:17:16.438 "aliases": [ 00:17:16.438 "5400c3b1-4226-11ef-aa83-81fbc7dfef58" 00:17:16.438 ], 00:17:16.438 "product_name": "Raid Volume", 00:17:16.438 "block_size": 4096, 00:17:16.438 "num_blocks": 7936, 00:17:16.438 "uuid": "5400c3b1-4226-11ef-aa83-81fbc7dfef58", 00:17:16.438 "md_size": 32, 00:17:16.438 "md_interleave": false, 00:17:16.438 "dif_type": 0, 00:17:16.438 "assigned_rate_limits": { 00:17:16.438 "rw_ios_per_sec": 0, 00:17:16.438 "rw_mbytes_per_sec": 0, 00:17:16.438 "r_mbytes_per_sec": 0, 00:17:16.438 "w_mbytes_per_sec": 0 00:17:16.438 }, 00:17:16.438 "claimed": false, 00:17:16.438 "zoned": false, 00:17:16.438 "supported_io_types": { 00:17:16.438 "read": true, 00:17:16.438 "write": true, 00:17:16.438 "unmap": false, 00:17:16.438 "flush": false, 00:17:16.438 "reset": true, 00:17:16.438 "nvme_admin": false, 00:17:16.438 "nvme_io": false, 00:17:16.438 "nvme_io_md": false, 00:17:16.438 "write_zeroes": true, 00:17:16.438 "zcopy": false, 00:17:16.438 "get_zone_info": false, 00:17:16.438 "zone_management": false, 00:17:16.438 "zone_append": false, 00:17:16.438 "compare": false, 00:17:16.438 "compare_and_write": false, 00:17:16.438 "abort": false, 00:17:16.438 "seek_hole": false, 00:17:16.438 "seek_data": false, 00:17:16.438 "copy": false, 00:17:16.438 "nvme_iov_md": false 00:17:16.438 }, 00:17:16.438 "memory_domains": [ 00:17:16.438 { 00:17:16.438 "dma_device_id": "system", 00:17:16.438 "dma_device_type": 1 00:17:16.438 }, 00:17:16.438 { 00:17:16.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.438 "dma_device_type": 2 00:17:16.438 }, 00:17:16.438 { 00:17:16.438 "dma_device_id": "system", 00:17:16.438 "dma_device_type": 1 00:17:16.438 }, 00:17:16.438 { 00:17:16.438 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.438 "dma_device_type": 2 00:17:16.438 } 00:17:16.438 ], 00:17:16.438 "driver_specific": { 00:17:16.438 "raid": { 00:17:16.438 "uuid": "5400c3b1-4226-11ef-aa83-81fbc7dfef58", 00:17:16.438 "strip_size_kb": 0, 00:17:16.438 "state": "online", 00:17:16.438 "raid_level": "raid1", 00:17:16.438 "superblock": true, 00:17:16.438 "num_base_bdevs": 2, 00:17:16.438 "num_base_bdevs_discovered": 2, 00:17:16.438 "num_base_bdevs_operational": 2, 00:17:16.438 "base_bdevs_list": [ 00:17:16.438 { 00:17:16.438 "name": "BaseBdev1", 00:17:16.438 "uuid": "5321fb50-4226-11ef-aa83-81fbc7dfef58", 00:17:16.438 "is_configured": true, 00:17:16.438 "data_offset": 256, 00:17:16.438 "data_size": 7936 00:17:16.438 }, 00:17:16.438 { 00:17:16.438 "name": 
"BaseBdev2", 00:17:16.438 "uuid": "54755963-4226-11ef-aa83-81fbc7dfef58", 00:17:16.438 "is_configured": true, 00:17:16.438 "data_offset": 256, 00:17:16.438 "data_size": 7936 00:17:16.438 } 00:17:16.438 ] 00:17:16.438 } 00:17:16.438 } 00:17:16.438 }' 00:17:16.438 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:16.438 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:16.438 BaseBdev2' 00:17:16.438 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:16.438 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 00:17:16.438 21:16:27 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:16.696 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:16.696 "name": "BaseBdev1", 00:17:16.696 "aliases": [ 00:17:16.696 "5321fb50-4226-11ef-aa83-81fbc7dfef58" 00:17:16.696 ], 00:17:16.696 "product_name": "Malloc disk", 00:17:16.696 "block_size": 4096, 00:17:16.696 "num_blocks": 8192, 00:17:16.696 "uuid": "5321fb50-4226-11ef-aa83-81fbc7dfef58", 00:17:16.696 "md_size": 32, 00:17:16.696 "md_interleave": false, 00:17:16.696 "dif_type": 0, 00:17:16.696 "assigned_rate_limits": { 00:17:16.696 "rw_ios_per_sec": 0, 00:17:16.696 "rw_mbytes_per_sec": 0, 00:17:16.696 "r_mbytes_per_sec": 0, 00:17:16.696 "w_mbytes_per_sec": 0 00:17:16.696 }, 00:17:16.696 "claimed": true, 00:17:16.696 "claim_type": "exclusive_write", 00:17:16.696 "zoned": false, 00:17:16.696 "supported_io_types": { 00:17:16.696 "read": true, 00:17:16.696 "write": true, 00:17:16.696 "unmap": true, 00:17:16.696 "flush": true, 00:17:16.696 "reset": true, 00:17:16.696 "nvme_admin": false, 00:17:16.696 "nvme_io": false, 00:17:16.696 "nvme_io_md": false, 00:17:16.696 "write_zeroes": true, 00:17:16.696 "zcopy": true, 00:17:16.696 "get_zone_info": false, 00:17:16.696 "zone_management": false, 00:17:16.696 "zone_append": false, 00:17:16.696 "compare": false, 00:17:16.696 "compare_and_write": false, 00:17:16.697 "abort": true, 00:17:16.697 "seek_hole": false, 00:17:16.697 "seek_data": false, 00:17:16.697 "copy": true, 00:17:16.697 "nvme_iov_md": false 00:17:16.697 }, 00:17:16.697 "memory_domains": [ 00:17:16.697 { 00:17:16.697 "dma_device_id": "system", 00:17:16.697 "dma_device_type": 1 00:17:16.697 }, 00:17:16.697 { 00:17:16.697 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.697 "dma_device_type": 2 00:17:16.697 } 00:17:16.697 ], 00:17:16.697 "driver_specific": {} 00:17:16.697 }' 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 
== 32 ]] 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:16.697 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:16.955 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:16.955 "name": "BaseBdev2", 00:17:16.955 "aliases": [ 00:17:16.955 "54755963-4226-11ef-aa83-81fbc7dfef58" 00:17:16.955 ], 00:17:16.955 "product_name": "Malloc disk", 00:17:16.955 "block_size": 4096, 00:17:16.955 "num_blocks": 8192, 00:17:16.955 "uuid": "54755963-4226-11ef-aa83-81fbc7dfef58", 00:17:16.955 "md_size": 32, 00:17:16.955 "md_interleave": false, 00:17:16.955 "dif_type": 0, 00:17:16.955 "assigned_rate_limits": { 00:17:16.955 "rw_ios_per_sec": 0, 00:17:16.955 "rw_mbytes_per_sec": 0, 00:17:16.955 "r_mbytes_per_sec": 0, 00:17:16.955 "w_mbytes_per_sec": 0 00:17:16.955 }, 00:17:16.955 "claimed": true, 00:17:16.955 "claim_type": "exclusive_write", 00:17:16.955 "zoned": false, 00:17:16.955 "supported_io_types": { 00:17:16.955 "read": true, 00:17:16.955 "write": true, 00:17:16.955 "unmap": true, 00:17:16.955 "flush": true, 00:17:16.955 "reset": true, 00:17:16.956 "nvme_admin": false, 00:17:16.956 "nvme_io": false, 00:17:16.956 "nvme_io_md": false, 00:17:16.956 "write_zeroes": true, 00:17:16.956 "zcopy": true, 00:17:16.956 "get_zone_info": false, 00:17:16.956 "zone_management": false, 00:17:16.956 "zone_append": false, 00:17:16.956 "compare": false, 00:17:16.956 "compare_and_write": false, 00:17:16.956 "abort": true, 00:17:16.956 "seek_hole": false, 00:17:16.956 "seek_data": false, 00:17:16.956 "copy": true, 00:17:16.956 "nvme_iov_md": false 00:17:16.956 }, 00:17:16.956 "memory_domains": [ 00:17:16.956 { 00:17:16.956 "dma_device_id": "system", 00:17:16.956 "dma_device_type": 1 00:17:16.956 }, 00:17:16.956 { 00:17:16.956 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:16.956 "dma_device_type": 2 00:17:16.956 } 00:17:16.956 ], 00:17:16.956 "driver_specific": {} 00:17:16.956 }' 00:17:16.956 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.956 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:16.956 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:16.956 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.956 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:16.956 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:16.956 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.956 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:16.956 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:16.956 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.956 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:16.956 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:16.956 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:17.213 [2024-07-14 21:16:28.602288] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:17.213 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:17.213 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:17.213 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:17.214 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:17.471 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:17.471 "name": 
"Existed_Raid", 00:17:17.471 "uuid": "5400c3b1-4226-11ef-aa83-81fbc7dfef58", 00:17:17.471 "strip_size_kb": 0, 00:17:17.471 "state": "online", 00:17:17.471 "raid_level": "raid1", 00:17:17.471 "superblock": true, 00:17:17.471 "num_base_bdevs": 2, 00:17:17.471 "num_base_bdevs_discovered": 1, 00:17:17.471 "num_base_bdevs_operational": 1, 00:17:17.471 "base_bdevs_list": [ 00:17:17.471 { 00:17:17.471 "name": null, 00:17:17.471 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:17.471 "is_configured": false, 00:17:17.471 "data_offset": 256, 00:17:17.471 "data_size": 7936 00:17:17.471 }, 00:17:17.471 { 00:17:17.471 "name": "BaseBdev2", 00:17:17.471 "uuid": "54755963-4226-11ef-aa83-81fbc7dfef58", 00:17:17.471 "is_configured": true, 00:17:17.471 "data_offset": 256, 00:17:17.471 "data_size": 7936 00:17:17.471 } 00:17:17.471 ] 00:17:17.471 }' 00:17:17.471 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:17.471 21:16:28 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.035 21:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:18.035 21:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:18.035 21:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:18.035 21:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.035 21:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:18.035 21:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:18.035 21:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:18.293 [2024-07-14 21:16:29.776656] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:18.293 [2024-07-14 21:16:29.776722] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:18.293 [2024-07-14 21:16:29.785851] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:18.293 [2024-07-14 21:16:29.785874] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:18.293 [2024-07-14 21:16:29.785878] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x352fc6a34a00 name Existed_Raid, state offline 00:17:18.293 21:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:18.293 21:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:18.293 21:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:18.293 21:16:29 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- 
bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@341 -- # killprocess 66058 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@948 -- # '[' -z 66058 ']' 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@952 -- # kill -0 66058 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # uname 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 66058 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:18.551 killing process with pid 66058 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66058' 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@967 -- # kill 66058 00:17:18.551 [2024-07-14 21:16:30.026498] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:18.551 [2024-07-14 21:16:30.026537] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:18.551 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@972 -- # wait 66058 00:17:18.810 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- bdev/bdev_raid.sh@343 -- # return 0 00:17:18.810 00:17:18.810 real 0m8.533s 00:17:18.810 user 0m14.749s 00:17:18.810 sys 0m1.508s 00:17:18.810 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:18.810 21:16:30 bdev_raid.raid_state_function_test_sb_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.810 ************************************ 00:17:18.810 END TEST raid_state_function_test_sb_md_separate 00:17:18.810 ************************************ 00:17:18.810 21:16:30 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:18.810 21:16:30 bdev_raid -- bdev/bdev_raid.sh@906 -- # run_test raid_superblock_test_md_separate raid_superblock_test raid1 2 00:17:18.810 21:16:30 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:18.810 21:16:30 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:18.810 21:16:30 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:18.810 ************************************ 00:17:18.810 START TEST raid_superblock_test_md_separate 00:17:18.810 ************************************ 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:18.810 
21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@411 -- # raid_pid=66328 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@412 -- # waitforlisten 66328 /var/tmp/spdk-raid.sock 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@829 -- # '[' -z 66328 ']' 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:18.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:18.810 21:16:30 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:18.810 [2024-07-14 21:16:30.342365] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:18.810 [2024-07-14 21:16:30.342590] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:19.378 EAL: TSC is not safe to use in SMP mode 00:17:19.378 EAL: TSC is not invariant 00:17:19.378 [2024-07-14 21:16:30.914571] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:19.636 [2024-07-14 21:16:31.007610] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
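Once the app is up, the fixture for this test is built from the RPC side: each malloc base bdev gets a 32-byte separate metadata area (the -m 32 from base_malloc_params, which is what makes md_interleave come back false), and each is wrapped in a passthru bdev with a fixed UUID so the superblock contents are deterministic. The commands below are lifted from the trace that follows; only the rpc/sock shorthand is added for readability:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    sock=/var/tmp/spdk-raid.sock
    # 32 MiB of 4096-byte blocks -> 8192 blocks, plus 32 B of separate (non-interleaved) md
    $rpc -s $sock bdev_malloc_create 32 4096 -m 32 -b malloc1
    $rpc -s $sock bdev_passthru_create -b malloc1 -p pt1 \
        -u 00000000-0000-0000-0000-000000000001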
00:17:19.636 [2024-07-14 21:16:31.009817] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.636 [2024-07-14 21:16:31.010625] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.636 [2024-07-14 21:16:31.010640] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:19.895 21:16:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:19.895 21:16:31 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@862 -- # return 0 00:17:19.895 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:19.895 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:19.895 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:19.895 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:19.895 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:19.895 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:19.895 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:19.895 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:19.895 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc1 00:17:20.154 malloc1 00:17:20.154 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:20.421 [2024-07-14 21:16:31.883150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:20.421 [2024-07-14 21:16:31.883223] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.421 [2024-07-14 21:16:31.883251] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10cdbf634780 00:17:20.421 [2024-07-14 21:16:31.883260] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.421 [2024-07-14 21:16:31.884113] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.421 [2024-07-14 21:16:31.884138] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:20.421 pt1 00:17:20.421 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:20.421 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:20.421 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:20.421 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:20.421 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:20.421 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:20.421 21:16:31 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:20.421 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:20.421 21:16:31 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -b malloc2 00:17:20.679 malloc2 00:17:20.679 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:20.937 [2024-07-14 21:16:32.391143] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:20.937 [2024-07-14 21:16:32.391218] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:20.937 [2024-07-14 21:16:32.391242] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10cdbf634c80 00:17:20.937 [2024-07-14 21:16:32.391249] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:20.937 [2024-07-14 21:16:32.391705] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:20.937 [2024-07-14 21:16:32.391729] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:20.937 pt2 00:17:20.937 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:20.937 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:20.937 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:21.195 [2024-07-14 21:16:32.623147] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:21.196 [2024-07-14 21:16:32.623539] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:21.196 [2024-07-14 21:16:32.623615] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10cdbf634f00 00:17:21.196 [2024-07-14 21:16:32.623621] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:21.196 [2024-07-14 21:16:32.623662] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10cdbf697e20 00:17:21.196 [2024-07-14 21:16:32.623698] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10cdbf634f00 00:17:21.196 [2024-07-14 21:16:32.623701] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10cdbf634f00 00:17:21.196 [2024-07-14 21:16:32.623715] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:21.196 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:21.196 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:21.196 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:21.196 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:21.196 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:21.196 
21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:21.196 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:21.196 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:21.196 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:21.196 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:21.196 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:21.196 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:21.454 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:21.454 "name": "raid_bdev1", 00:17:21.454 "uuid": "581c8d7a-4226-11ef-aa83-81fbc7dfef58", 00:17:21.454 "strip_size_kb": 0, 00:17:21.454 "state": "online", 00:17:21.454 "raid_level": "raid1", 00:17:21.454 "superblock": true, 00:17:21.454 "num_base_bdevs": 2, 00:17:21.454 "num_base_bdevs_discovered": 2, 00:17:21.454 "num_base_bdevs_operational": 2, 00:17:21.454 "base_bdevs_list": [ 00:17:21.454 { 00:17:21.454 "name": "pt1", 00:17:21.454 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.454 "is_configured": true, 00:17:21.454 "data_offset": 256, 00:17:21.454 "data_size": 7936 00:17:21.454 }, 00:17:21.454 { 00:17:21.454 "name": "pt2", 00:17:21.454 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.454 "is_configured": true, 00:17:21.454 "data_offset": 256, 00:17:21.454 "data_size": 7936 00:17:21.454 } 00:17:21.454 ] 00:17:21.454 }' 00:17:21.454 21:16:32 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:21.454 21:16:32 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:21.712 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:21.712 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:21.712 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:21.712 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:21.712 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:21.712 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:17:21.712 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:21.712 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:21.969 [2024-07-14 21:16:33.455167] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:21.969 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:21.969 "name": "raid_bdev1", 00:17:21.969 "aliases": [ 00:17:21.969 "581c8d7a-4226-11ef-aa83-81fbc7dfef58" 00:17:21.969 ], 00:17:21.969 "product_name": "Raid Volume", 00:17:21.969 "block_size": 
4096, 00:17:21.969 "num_blocks": 7936, 00:17:21.969 "uuid": "581c8d7a-4226-11ef-aa83-81fbc7dfef58", 00:17:21.969 "md_size": 32, 00:17:21.969 "md_interleave": false, 00:17:21.969 "dif_type": 0, 00:17:21.969 "assigned_rate_limits": { 00:17:21.969 "rw_ios_per_sec": 0, 00:17:21.969 "rw_mbytes_per_sec": 0, 00:17:21.969 "r_mbytes_per_sec": 0, 00:17:21.969 "w_mbytes_per_sec": 0 00:17:21.969 }, 00:17:21.969 "claimed": false, 00:17:21.969 "zoned": false, 00:17:21.969 "supported_io_types": { 00:17:21.969 "read": true, 00:17:21.969 "write": true, 00:17:21.969 "unmap": false, 00:17:21.969 "flush": false, 00:17:21.969 "reset": true, 00:17:21.969 "nvme_admin": false, 00:17:21.969 "nvme_io": false, 00:17:21.969 "nvme_io_md": false, 00:17:21.969 "write_zeroes": true, 00:17:21.969 "zcopy": false, 00:17:21.969 "get_zone_info": false, 00:17:21.969 "zone_management": false, 00:17:21.969 "zone_append": false, 00:17:21.969 "compare": false, 00:17:21.969 "compare_and_write": false, 00:17:21.969 "abort": false, 00:17:21.969 "seek_hole": false, 00:17:21.969 "seek_data": false, 00:17:21.969 "copy": false, 00:17:21.969 "nvme_iov_md": false 00:17:21.969 }, 00:17:21.969 "memory_domains": [ 00:17:21.969 { 00:17:21.969 "dma_device_id": "system", 00:17:21.969 "dma_device_type": 1 00:17:21.969 }, 00:17:21.969 { 00:17:21.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.969 "dma_device_type": 2 00:17:21.969 }, 00:17:21.969 { 00:17:21.969 "dma_device_id": "system", 00:17:21.969 "dma_device_type": 1 00:17:21.969 }, 00:17:21.969 { 00:17:21.969 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:21.969 "dma_device_type": 2 00:17:21.969 } 00:17:21.969 ], 00:17:21.969 "driver_specific": { 00:17:21.969 "raid": { 00:17:21.969 "uuid": "581c8d7a-4226-11ef-aa83-81fbc7dfef58", 00:17:21.969 "strip_size_kb": 0, 00:17:21.969 "state": "online", 00:17:21.969 "raid_level": "raid1", 00:17:21.969 "superblock": true, 00:17:21.969 "num_base_bdevs": 2, 00:17:21.969 "num_base_bdevs_discovered": 2, 00:17:21.969 "num_base_bdevs_operational": 2, 00:17:21.969 "base_bdevs_list": [ 00:17:21.969 { 00:17:21.969 "name": "pt1", 00:17:21.969 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:21.969 "is_configured": true, 00:17:21.969 "data_offset": 256, 00:17:21.969 "data_size": 7936 00:17:21.969 }, 00:17:21.969 { 00:17:21.969 "name": "pt2", 00:17:21.969 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:21.969 "is_configured": true, 00:17:21.969 "data_offset": 256, 00:17:21.969 "data_size": 7936 00:17:21.969 } 00:17:21.969 ] 00:17:21.969 } 00:17:21.969 } 00:17:21.969 }' 00:17:21.969 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:21.969 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:21.969 pt2' 00:17:21.969 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:21.969 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:21.969 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:22.227 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:22.227 "name": "pt1", 00:17:22.227 "aliases": [ 00:17:22.227 "00000000-0000-0000-0000-000000000001" 00:17:22.227 ], 00:17:22.227 "product_name": 
"passthru", 00:17:22.227 "block_size": 4096, 00:17:22.227 "num_blocks": 8192, 00:17:22.227 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:22.227 "md_size": 32, 00:17:22.227 "md_interleave": false, 00:17:22.227 "dif_type": 0, 00:17:22.227 "assigned_rate_limits": { 00:17:22.227 "rw_ios_per_sec": 0, 00:17:22.227 "rw_mbytes_per_sec": 0, 00:17:22.227 "r_mbytes_per_sec": 0, 00:17:22.227 "w_mbytes_per_sec": 0 00:17:22.227 }, 00:17:22.227 "claimed": true, 00:17:22.227 "claim_type": "exclusive_write", 00:17:22.227 "zoned": false, 00:17:22.227 "supported_io_types": { 00:17:22.227 "read": true, 00:17:22.227 "write": true, 00:17:22.227 "unmap": true, 00:17:22.227 "flush": true, 00:17:22.227 "reset": true, 00:17:22.227 "nvme_admin": false, 00:17:22.227 "nvme_io": false, 00:17:22.227 "nvme_io_md": false, 00:17:22.227 "write_zeroes": true, 00:17:22.227 "zcopy": true, 00:17:22.227 "get_zone_info": false, 00:17:22.227 "zone_management": false, 00:17:22.227 "zone_append": false, 00:17:22.227 "compare": false, 00:17:22.227 "compare_and_write": false, 00:17:22.227 "abort": true, 00:17:22.227 "seek_hole": false, 00:17:22.227 "seek_data": false, 00:17:22.227 "copy": true, 00:17:22.227 "nvme_iov_md": false 00:17:22.227 }, 00:17:22.227 "memory_domains": [ 00:17:22.227 { 00:17:22.227 "dma_device_id": "system", 00:17:22.227 "dma_device_type": 1 00:17:22.227 }, 00:17:22.227 { 00:17:22.227 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.227 "dma_device_type": 2 00:17:22.227 } 00:17:22.227 ], 00:17:22.227 "driver_specific": { 00:17:22.227 "passthru": { 00:17:22.227 "name": "pt1", 00:17:22.227 "base_bdev_name": "malloc1" 00:17:22.227 } 00:17:22.227 } 00:17:22.227 }' 00:17:22.227 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.227 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.227 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:22.227 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.227 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.227 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:22.227 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.227 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.227 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:22.227 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.227 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.484 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:22.484 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:22.484 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:22.484 21:16:33 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:22.742 "name": 
"pt2", 00:17:22.742 "aliases": [ 00:17:22.742 "00000000-0000-0000-0000-000000000002" 00:17:22.742 ], 00:17:22.742 "product_name": "passthru", 00:17:22.742 "block_size": 4096, 00:17:22.742 "num_blocks": 8192, 00:17:22.742 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:22.742 "md_size": 32, 00:17:22.742 "md_interleave": false, 00:17:22.742 "dif_type": 0, 00:17:22.742 "assigned_rate_limits": { 00:17:22.742 "rw_ios_per_sec": 0, 00:17:22.742 "rw_mbytes_per_sec": 0, 00:17:22.742 "r_mbytes_per_sec": 0, 00:17:22.742 "w_mbytes_per_sec": 0 00:17:22.742 }, 00:17:22.742 "claimed": true, 00:17:22.742 "claim_type": "exclusive_write", 00:17:22.742 "zoned": false, 00:17:22.742 "supported_io_types": { 00:17:22.742 "read": true, 00:17:22.742 "write": true, 00:17:22.742 "unmap": true, 00:17:22.742 "flush": true, 00:17:22.742 "reset": true, 00:17:22.742 "nvme_admin": false, 00:17:22.742 "nvme_io": false, 00:17:22.742 "nvme_io_md": false, 00:17:22.742 "write_zeroes": true, 00:17:22.742 "zcopy": true, 00:17:22.742 "get_zone_info": false, 00:17:22.742 "zone_management": false, 00:17:22.742 "zone_append": false, 00:17:22.742 "compare": false, 00:17:22.742 "compare_and_write": false, 00:17:22.742 "abort": true, 00:17:22.742 "seek_hole": false, 00:17:22.742 "seek_data": false, 00:17:22.742 "copy": true, 00:17:22.742 "nvme_iov_md": false 00:17:22.742 }, 00:17:22.742 "memory_domains": [ 00:17:22.742 { 00:17:22.742 "dma_device_id": "system", 00:17:22.742 "dma_device_type": 1 00:17:22.742 }, 00:17:22.742 { 00:17:22.742 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:22.742 "dma_device_type": 2 00:17:22.742 } 00:17:22.742 ], 00:17:22.742 "driver_specific": { 00:17:22.742 "passthru": { 00:17:22.742 "name": "pt2", 00:17:22.742 "base_bdev_name": "malloc2" 00:17:22.742 } 00:17:22.742 } 00:17:22.742 }' 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:22.742 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:23.000 [2024-07-14 21:16:34.383247] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: 
raid_bdev_dump_config_json 00:17:23.000 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=581c8d7a-4226-11ef-aa83-81fbc7dfef58 00:17:23.000 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@435 -- # '[' -z 581c8d7a-4226-11ef-aa83-81fbc7dfef58 ']' 00:17:23.000 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:23.258 [2024-07-14 21:16:34.631172] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:23.258 [2024-07-14 21:16:34.631190] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:23.258 [2024-07-14 21:16:34.631226] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:23.258 [2024-07-14 21:16:34.631242] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:23.258 [2024-07-14 21:16:34.631246] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10cdbf634f00 name raid_bdev1, state offline 00:17:23.258 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:23.258 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:23.516 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:23.516 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:23.516 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:23.516 21:16:34 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:23.516 21:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:23.516 21:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:23.774 21:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:23.774 21:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:24.031 21:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:24.031 21:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:24.031 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@648 -- # local es=0 00:17:24.031 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:24.031 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.031 
21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.031 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.031 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.031 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.031 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:24.031 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:24.031 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:24.031 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:24.289 [2024-07-14 21:16:35.811248] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:24.289 [2024-07-14 21:16:35.811957] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:24.289 [2024-07-14 21:16:35.811997] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:24.289 [2024-07-14 21:16:35.812034] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:24.289 [2024-07-14 21:16:35.812044] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:24.289 [2024-07-14 21:16:35.812049] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10cdbf634c80 name raid_bdev1, state configuring 00:17:24.289 request: 00:17:24.289 { 00:17:24.289 "name": "raid_bdev1", 00:17:24.289 "raid_level": "raid1", 00:17:24.289 "base_bdevs": [ 00:17:24.289 "malloc1", 00:17:24.289 "malloc2" 00:17:24.289 ], 00:17:24.289 "superblock": false, 00:17:24.289 "method": "bdev_raid_create", 00:17:24.289 "req_id": 1 00:17:24.289 } 00:17:24.289 Got JSON-RPC error response 00:17:24.289 response: 00:17:24.289 { 00:17:24.289 "code": -17, 00:17:24.289 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:24.289 } 00:17:24.289 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@651 -- # es=1 00:17:24.289 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:24.289 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:24.289 21:16:35 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:24.289 21:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.289 21:16:35 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:24.547 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:24.547 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@459 
-- # '[' -n '' ']' 00:17:24.547 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:24.805 [2024-07-14 21:16:36.255281] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:24.805 [2024-07-14 21:16:36.255351] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:24.805 [2024-07-14 21:16:36.255362] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10cdbf634780 00:17:24.805 [2024-07-14 21:16:36.255368] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:24.805 [2024-07-14 21:16:36.256149] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:24.805 [2024-07-14 21:16:36.256186] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:24.805 [2024-07-14 21:16:36.256220] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:24.805 [2024-07-14 21:16:36.256232] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:24.805 pt1 00:17:24.805 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:24.805 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:24.805 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:24.805 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:24.805 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:24.805 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:24.805 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:24.805 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:24.805 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:24.805 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:24.805 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:24.805 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.063 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.063 "name": "raid_bdev1", 00:17:25.063 "uuid": "581c8d7a-4226-11ef-aa83-81fbc7dfef58", 00:17:25.063 "strip_size_kb": 0, 00:17:25.063 "state": "configuring", 00:17:25.063 "raid_level": "raid1", 00:17:25.063 "superblock": true, 00:17:25.063 "num_base_bdevs": 2, 00:17:25.063 "num_base_bdevs_discovered": 1, 00:17:25.063 "num_base_bdevs_operational": 2, 00:17:25.063 "base_bdevs_list": [ 00:17:25.063 { 00:17:25.063 "name": "pt1", 00:17:25.063 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.063 "is_configured": true, 00:17:25.063 "data_offset": 256, 00:17:25.063 "data_size": 7936 00:17:25.063 }, 00:17:25.063 { 
00:17:25.063 "name": null, 00:17:25.063 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.063 "is_configured": false, 00:17:25.063 "data_offset": 256, 00:17:25.063 "data_size": 7936 00:17:25.063 } 00:17:25.063 ] 00:17:25.063 }' 00:17:25.063 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.064 21:16:36 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:25.322 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:25.322 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:25.322 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:25.322 21:16:36 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:25.580 [2024-07-14 21:16:37.039284] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:25.580 [2024-07-14 21:16:37.039352] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:25.580 [2024-07-14 21:16:37.039364] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10cdbf634f00 00:17:25.580 [2024-07-14 21:16:37.039371] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:25.580 [2024-07-14 21:16:37.039459] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:25.580 [2024-07-14 21:16:37.039468] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:25.580 [2024-07-14 21:16:37.039494] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:25.580 [2024-07-14 21:16:37.039512] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:25.580 [2024-07-14 21:16:37.039548] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10cdbf635180 00:17:25.580 [2024-07-14 21:16:37.039551] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:25.580 [2024-07-14 21:16:37.039568] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10cdbf697e20 00:17:25.580 [2024-07-14 21:16:37.039591] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10cdbf635180 00:17:25.580 [2024-07-14 21:16:37.039610] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10cdbf635180 00:17:25.580 [2024-07-14 21:16:37.039626] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:25.580 pt2 00:17:25.580 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:25.580 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:25.580 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:25.580 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:25.580 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:25.580 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:25.580 
21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:25.580 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:25.581 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:25.581 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:25.581 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:25.581 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:25.581 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:25.581 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:25.838 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:25.838 "name": "raid_bdev1", 00:17:25.838 "uuid": "581c8d7a-4226-11ef-aa83-81fbc7dfef58", 00:17:25.838 "strip_size_kb": 0, 00:17:25.838 "state": "online", 00:17:25.838 "raid_level": "raid1", 00:17:25.838 "superblock": true, 00:17:25.838 "num_base_bdevs": 2, 00:17:25.838 "num_base_bdevs_discovered": 2, 00:17:25.838 "num_base_bdevs_operational": 2, 00:17:25.838 "base_bdevs_list": [ 00:17:25.838 { 00:17:25.838 "name": "pt1", 00:17:25.838 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:25.838 "is_configured": true, 00:17:25.838 "data_offset": 256, 00:17:25.838 "data_size": 7936 00:17:25.838 }, 00:17:25.838 { 00:17:25.838 "name": "pt2", 00:17:25.838 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:25.838 "is_configured": true, 00:17:25.838 "data_offset": 256, 00:17:25.838 "data_size": 7936 00:17:25.838 } 00:17:25.839 ] 00:17:25.839 }' 00:17:25.839 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:25.839 21:16:37 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:26.096 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:26.096 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:26.096 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:26.096 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:26.096 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:26.096 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@198 -- # local name 00:17:26.096 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:26.096 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:26.355 [2024-07-14 21:16:37.851297] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:26.355 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:26.355 "name": "raid_bdev1", 00:17:26.355 "aliases": [ 00:17:26.355 
"581c8d7a-4226-11ef-aa83-81fbc7dfef58" 00:17:26.355 ], 00:17:26.355 "product_name": "Raid Volume", 00:17:26.355 "block_size": 4096, 00:17:26.355 "num_blocks": 7936, 00:17:26.355 "uuid": "581c8d7a-4226-11ef-aa83-81fbc7dfef58", 00:17:26.355 "md_size": 32, 00:17:26.355 "md_interleave": false, 00:17:26.355 "dif_type": 0, 00:17:26.355 "assigned_rate_limits": { 00:17:26.355 "rw_ios_per_sec": 0, 00:17:26.355 "rw_mbytes_per_sec": 0, 00:17:26.355 "r_mbytes_per_sec": 0, 00:17:26.355 "w_mbytes_per_sec": 0 00:17:26.355 }, 00:17:26.355 "claimed": false, 00:17:26.355 "zoned": false, 00:17:26.355 "supported_io_types": { 00:17:26.355 "read": true, 00:17:26.355 "write": true, 00:17:26.355 "unmap": false, 00:17:26.355 "flush": false, 00:17:26.355 "reset": true, 00:17:26.355 "nvme_admin": false, 00:17:26.355 "nvme_io": false, 00:17:26.355 "nvme_io_md": false, 00:17:26.355 "write_zeroes": true, 00:17:26.355 "zcopy": false, 00:17:26.355 "get_zone_info": false, 00:17:26.355 "zone_management": false, 00:17:26.355 "zone_append": false, 00:17:26.355 "compare": false, 00:17:26.355 "compare_and_write": false, 00:17:26.355 "abort": false, 00:17:26.355 "seek_hole": false, 00:17:26.355 "seek_data": false, 00:17:26.355 "copy": false, 00:17:26.355 "nvme_iov_md": false 00:17:26.355 }, 00:17:26.355 "memory_domains": [ 00:17:26.355 { 00:17:26.355 "dma_device_id": "system", 00:17:26.355 "dma_device_type": 1 00:17:26.355 }, 00:17:26.355 { 00:17:26.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.355 "dma_device_type": 2 00:17:26.355 }, 00:17:26.355 { 00:17:26.355 "dma_device_id": "system", 00:17:26.355 "dma_device_type": 1 00:17:26.355 }, 00:17:26.355 { 00:17:26.355 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.355 "dma_device_type": 2 00:17:26.355 } 00:17:26.355 ], 00:17:26.355 "driver_specific": { 00:17:26.355 "raid": { 00:17:26.355 "uuid": "581c8d7a-4226-11ef-aa83-81fbc7dfef58", 00:17:26.355 "strip_size_kb": 0, 00:17:26.355 "state": "online", 00:17:26.355 "raid_level": "raid1", 00:17:26.355 "superblock": true, 00:17:26.355 "num_base_bdevs": 2, 00:17:26.355 "num_base_bdevs_discovered": 2, 00:17:26.355 "num_base_bdevs_operational": 2, 00:17:26.355 "base_bdevs_list": [ 00:17:26.355 { 00:17:26.355 "name": "pt1", 00:17:26.355 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:26.355 "is_configured": true, 00:17:26.355 "data_offset": 256, 00:17:26.355 "data_size": 7936 00:17:26.355 }, 00:17:26.355 { 00:17:26.355 "name": "pt2", 00:17:26.355 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:26.355 "is_configured": true, 00:17:26.355 "data_offset": 256, 00:17:26.355 "data_size": 7936 00:17:26.355 } 00:17:26.355 ] 00:17:26.355 } 00:17:26.355 } 00:17:26.355 }' 00:17:26.355 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:26.355 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:26.355 pt2' 00:17:26.355 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:26.355 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:26.355 21:16:37 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:26.613 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:26.613 "name": "pt1", 
00:17:26.613 "aliases": [ 00:17:26.613 "00000000-0000-0000-0000-000000000001" 00:17:26.613 ], 00:17:26.613 "product_name": "passthru", 00:17:26.613 "block_size": 4096, 00:17:26.613 "num_blocks": 8192, 00:17:26.613 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:26.613 "md_size": 32, 00:17:26.613 "md_interleave": false, 00:17:26.613 "dif_type": 0, 00:17:26.613 "assigned_rate_limits": { 00:17:26.613 "rw_ios_per_sec": 0, 00:17:26.613 "rw_mbytes_per_sec": 0, 00:17:26.613 "r_mbytes_per_sec": 0, 00:17:26.613 "w_mbytes_per_sec": 0 00:17:26.613 }, 00:17:26.613 "claimed": true, 00:17:26.613 "claim_type": "exclusive_write", 00:17:26.613 "zoned": false, 00:17:26.613 "supported_io_types": { 00:17:26.613 "read": true, 00:17:26.613 "write": true, 00:17:26.613 "unmap": true, 00:17:26.613 "flush": true, 00:17:26.613 "reset": true, 00:17:26.613 "nvme_admin": false, 00:17:26.613 "nvme_io": false, 00:17:26.613 "nvme_io_md": false, 00:17:26.613 "write_zeroes": true, 00:17:26.613 "zcopy": true, 00:17:26.613 "get_zone_info": false, 00:17:26.613 "zone_management": false, 00:17:26.613 "zone_append": false, 00:17:26.613 "compare": false, 00:17:26.613 "compare_and_write": false, 00:17:26.613 "abort": true, 00:17:26.613 "seek_hole": false, 00:17:26.613 "seek_data": false, 00:17:26.613 "copy": true, 00:17:26.613 "nvme_iov_md": false 00:17:26.613 }, 00:17:26.613 "memory_domains": [ 00:17:26.613 { 00:17:26.613 "dma_device_id": "system", 00:17:26.613 "dma_device_type": 1 00:17:26.613 }, 00:17:26.613 { 00:17:26.613 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:26.613 "dma_device_type": 2 00:17:26.613 } 00:17:26.613 ], 00:17:26.613 "driver_specific": { 00:17:26.613 "passthru": { 00:17:26.613 "name": "pt1", 00:17:26.613 "base_bdev_name": "malloc1" 00:17:26.613 } 00:17:26.613 } 00:17:26.613 }' 00:17:26.613 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.613 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:26.613 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:26.613 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:26.871 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:26.871 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:26.871 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:26.871 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:26.871 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:26.871 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:26.871 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:26.871 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:26.871 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:26.871 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:26.871 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:27.129 
21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:27.129 "name": "pt2", 00:17:27.129 "aliases": [ 00:17:27.129 "00000000-0000-0000-0000-000000000002" 00:17:27.129 ], 00:17:27.129 "product_name": "passthru", 00:17:27.129 "block_size": 4096, 00:17:27.129 "num_blocks": 8192, 00:17:27.129 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:27.129 "md_size": 32, 00:17:27.129 "md_interleave": false, 00:17:27.129 "dif_type": 0, 00:17:27.129 "assigned_rate_limits": { 00:17:27.129 "rw_ios_per_sec": 0, 00:17:27.129 "rw_mbytes_per_sec": 0, 00:17:27.129 "r_mbytes_per_sec": 0, 00:17:27.129 "w_mbytes_per_sec": 0 00:17:27.129 }, 00:17:27.129 "claimed": true, 00:17:27.129 "claim_type": "exclusive_write", 00:17:27.129 "zoned": false, 00:17:27.129 "supported_io_types": { 00:17:27.129 "read": true, 00:17:27.129 "write": true, 00:17:27.129 "unmap": true, 00:17:27.129 "flush": true, 00:17:27.129 "reset": true, 00:17:27.129 "nvme_admin": false, 00:17:27.129 "nvme_io": false, 00:17:27.129 "nvme_io_md": false, 00:17:27.129 "write_zeroes": true, 00:17:27.129 "zcopy": true, 00:17:27.129 "get_zone_info": false, 00:17:27.129 "zone_management": false, 00:17:27.129 "zone_append": false, 00:17:27.130 "compare": false, 00:17:27.130 "compare_and_write": false, 00:17:27.130 "abort": true, 00:17:27.130 "seek_hole": false, 00:17:27.130 "seek_data": false, 00:17:27.130 "copy": true, 00:17:27.130 "nvme_iov_md": false 00:17:27.130 }, 00:17:27.130 "memory_domains": [ 00:17:27.130 { 00:17:27.130 "dma_device_id": "system", 00:17:27.130 "dma_device_type": 1 00:17:27.130 }, 00:17:27.130 { 00:17:27.130 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:27.130 "dma_device_type": 2 00:17:27.130 } 00:17:27.130 ], 00:17:27.130 "driver_specific": { 00:17:27.130 "passthru": { 00:17:27.130 "name": "pt2", 00:17:27.130 "base_bdev_name": "malloc2" 00:17:27.130 } 00:17:27.130 } 00:17:27.130 }' 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@205 -- # [[ 4096 == 4096 ]] 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@207 -- # [[ false == false ]] 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:27.130 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b 
raid_bdev1 00:17:27.388 [2024-07-14 21:16:38.747315] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:27.388 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@486 -- # '[' 581c8d7a-4226-11ef-aa83-81fbc7dfef58 '!=' 581c8d7a-4226-11ef-aa83-81fbc7dfef58 ']' 00:17:27.388 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:27.388 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:27.388 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@214 -- # return 0 00:17:27.388 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:27.646 [2024-07-14 21:16:38.935335] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:27.646 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:27.646 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:27.646 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:27.646 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:27.646 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:27.646 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:27.646 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:27.646 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:27.646 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:27.646 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:27.646 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:27.646 21:16:38 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:27.905 21:16:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:27.905 "name": "raid_bdev1", 00:17:27.905 "uuid": "581c8d7a-4226-11ef-aa83-81fbc7dfef58", 00:17:27.905 "strip_size_kb": 0, 00:17:27.905 "state": "online", 00:17:27.905 "raid_level": "raid1", 00:17:27.905 "superblock": true, 00:17:27.905 "num_base_bdevs": 2, 00:17:27.905 "num_base_bdevs_discovered": 1, 00:17:27.905 "num_base_bdevs_operational": 1, 00:17:27.905 "base_bdevs_list": [ 00:17:27.905 { 00:17:27.905 "name": null, 00:17:27.905 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:27.905 "is_configured": false, 00:17:27.905 "data_offset": 256, 00:17:27.905 "data_size": 7936 00:17:27.905 }, 00:17:27.905 { 00:17:27.905 "name": "pt2", 00:17:27.905 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:27.905 "is_configured": true, 00:17:27.905 "data_offset": 256, 00:17:27.905 "data_size": 7936 00:17:27.905 } 00:17:27.905 ] 00:17:27.905 }' 00:17:27.905 21:16:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:17:27.905 21:16:39 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:28.163 21:16:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:28.422 [2024-07-14 21:16:39.735289] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:28.422 [2024-07-14 21:16:39.735306] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:28.422 [2024-07-14 21:16:39.735346] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:28.422 [2024-07-14 21:16:39.735357] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:28.422 [2024-07-14 21:16:39.735361] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10cdbf635180 name raid_bdev1, state offline 00:17:28.422 21:16:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:28.422 21:16:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.422 21:16:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:28.422 21:16:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:28.422 21:16:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:28.422 21:16:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:28.422 21:16:39 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:28.681 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:28.681 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:28.681 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:28.681 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:28.681 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@518 -- # i=1 00:17:28.681 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:28.939 [2024-07-14 21:16:40.331316] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:28.939 [2024-07-14 21:16:40.331394] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:28.939 [2024-07-14 21:16:40.331406] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10cdbf634f00 00:17:28.939 [2024-07-14 21:16:40.331414] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:28.939 [2024-07-14 21:16:40.332083] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:28.939 [2024-07-14 21:16:40.332108] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:28.939 [2024-07-14 21:16:40.332132] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock 
found on bdev pt2 00:17:28.939 [2024-07-14 21:16:40.332145] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:28.939 [2024-07-14 21:16:40.332159] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10cdbf635180 00:17:28.939 [2024-07-14 21:16:40.332163] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:28.939 [2024-07-14 21:16:40.332183] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10cdbf697e20 00:17:28.939 [2024-07-14 21:16:40.332206] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10cdbf635180 00:17:28.939 [2024-07-14 21:16:40.332210] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10cdbf635180 00:17:28.939 [2024-07-14 21:16:40.332225] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:28.939 pt2 00:17:28.939 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:28.939 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:28.939 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:28.939 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:28.939 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:28.939 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:28.939 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:28.939 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:28.939 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:28.939 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:28.939 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:28.939 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.198 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:29.198 "name": "raid_bdev1", 00:17:29.198 "uuid": "581c8d7a-4226-11ef-aa83-81fbc7dfef58", 00:17:29.198 "strip_size_kb": 0, 00:17:29.198 "state": "online", 00:17:29.198 "raid_level": "raid1", 00:17:29.198 "superblock": true, 00:17:29.198 "num_base_bdevs": 2, 00:17:29.198 "num_base_bdevs_discovered": 1, 00:17:29.198 "num_base_bdevs_operational": 1, 00:17:29.198 "base_bdevs_list": [ 00:17:29.198 { 00:17:29.198 "name": null, 00:17:29.198 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:29.198 "is_configured": false, 00:17:29.198 "data_offset": 256, 00:17:29.198 "data_size": 7936 00:17:29.198 }, 00:17:29.198 { 00:17:29.198 "name": "pt2", 00:17:29.198 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:29.198 "is_configured": true, 00:17:29.198 "data_offset": 256, 00:17:29.198 "data_size": 7936 00:17:29.198 } 00:17:29.198 ] 00:17:29.198 }' 00:17:29.198 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # 
xtrace_disable 00:17:29.198 21:16:40 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:29.457 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:29.457 [2024-07-14 21:16:40.983337] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.457 [2024-07-14 21:16:40.983353] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:29.457 [2024-07-14 21:16:40.983389] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:29.457 [2024-07-14 21:16:40.983399] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:29.457 [2024-07-14 21:16:40.983403] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10cdbf635180 name raid_bdev1, state offline 00:17:29.457 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:29.457 21:16:40 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:29.715 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:29.715 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:29.715 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:17:29.715 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:29.974 [2024-07-14 21:16:41.431347] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:29.974 [2024-07-14 21:16:41.431385] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:29.974 [2024-07-14 21:16:41.431413] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x10cdbf634c80 00:17:29.974 [2024-07-14 21:16:41.431420] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:29.974 [2024-07-14 21:16:41.432228] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:29.974 [2024-07-14 21:16:41.432267] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:29.974 [2024-07-14 21:16:41.432289] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:29.974 [2024-07-14 21:16:41.432301] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:29.974 [2024-07-14 21:16:41.432320] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:29.974 [2024-07-14 21:16:41.432333] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:29.974 [2024-07-14 21:16:41.432340] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10cdbf634780 name raid_bdev1, state configuring 00:17:29.974 [2024-07-14 21:16:41.432348] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:29.974 [2024-07-14 21:16:41.432363] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x10cdbf634780 00:17:29.974 [2024-07-14 
21:16:41.432367] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4096 00:17:29.974 [2024-07-14 21:16:41.432400] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x10cdbf697e20 00:17:29.974 [2024-07-14 21:16:41.432424] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x10cdbf634780 00:17:29.974 [2024-07-14 21:16:41.432427] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x10cdbf634780 00:17:29.974 [2024-07-14 21:16:41.432441] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:29.974 pt1 00:17:29.974 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:29.974 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:29.974 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:29.974 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:29.974 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:29.974 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:29.974 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:29.974 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:29.974 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:29.974 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:29.974 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:29.974 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:29.974 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:30.233 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:30.233 "name": "raid_bdev1", 00:17:30.233 "uuid": "581c8d7a-4226-11ef-aa83-81fbc7dfef58", 00:17:30.233 "strip_size_kb": 0, 00:17:30.233 "state": "online", 00:17:30.233 "raid_level": "raid1", 00:17:30.233 "superblock": true, 00:17:30.233 "num_base_bdevs": 2, 00:17:30.233 "num_base_bdevs_discovered": 1, 00:17:30.233 "num_base_bdevs_operational": 1, 00:17:30.233 "base_bdevs_list": [ 00:17:30.233 { 00:17:30.233 "name": null, 00:17:30.233 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:30.233 "is_configured": false, 00:17:30.233 "data_offset": 256, 00:17:30.233 "data_size": 7936 00:17:30.233 }, 00:17:30.233 { 00:17:30.233 "name": "pt2", 00:17:30.233 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:30.233 "is_configured": true, 00:17:30.233 "data_offset": 256, 00:17:30.233 "data_size": 7936 00:17:30.233 } 00:17:30.233 ] 00:17:30.233 }' 00:17:30.233 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:30.233 21:16:41 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:30.490 21:16:41 
bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:30.490 21:16:41 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:30.748 21:16:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:30.748 21:16:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:30.748 21:16:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:31.006 [2024-07-14 21:16:42.343413] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@557 -- # '[' 581c8d7a-4226-11ef-aa83-81fbc7dfef58 '!=' 581c8d7a-4226-11ef-aa83-81fbc7dfef58 ']' 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@562 -- # killprocess 66328 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@948 -- # '[' -z 66328 ']' 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@952 -- # kill -0 66328 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # uname 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # ps -c -o command 66328 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # tail -1 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66328' 00:17:31.006 killing process with pid 66328 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@967 -- # kill 66328 00:17:31.006 [2024-07-14 21:16:42.367917] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:31.006 [2024-07-14 21:16:42.367937] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:31.006 [2024-07-14 21:16:42.367949] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:31.006 [2024-07-14 21:16:42.367952] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x10cdbf634780 name raid_bdev1, state offline 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@972 -- # wait 66328 00:17:31.006 [2024-07-14 21:16:42.379673] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- bdev/bdev_raid.sh@564 -- # return 0 00:17:31.006 00:17:31.006 real 0m12.204s 00:17:31.006 user 0m21.698s 00:17:31.006 sys 0m1.985s 00:17:31.006 ************************************ 00:17:31.006 END TEST raid_superblock_test_md_separate 00:17:31.006 ************************************ 00:17:31.006 21:16:42 
bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:31.006 21:16:42 bdev_raid.raid_superblock_test_md_separate -- common/autotest_common.sh@10 -- # set +x 00:17:31.265 21:16:42 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:31.265 21:16:42 bdev_raid -- bdev/bdev_raid.sh@907 -- # '[' '' = true ']' 00:17:31.265 21:16:42 bdev_raid -- bdev/bdev_raid.sh@911 -- # base_malloc_params='-m 32 -i' 00:17:31.265 21:16:42 bdev_raid -- bdev/bdev_raid.sh@912 -- # run_test raid_state_function_test_sb_md_interleaved raid_state_function_test raid1 2 true 00:17:31.265 21:16:42 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:17:31.265 21:16:42 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:31.265 21:16:42 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:31.265 ************************************ 00:17:31.265 START TEST raid_state_function_test_sb_md_interleaved 00:17:31.265 ************************************ 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_state_function_test raid1 2 true 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@220 -- # local raid_level=raid1 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@221 -- # local num_base_bdevs=2 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@222 -- # local superblock=true 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@223 -- # local raid_bdev 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i = 1 )) 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev1 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # echo BaseBdev2 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i++ )) 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # (( i <= num_base_bdevs )) 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@224 -- # local base_bdevs 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@225 -- # local raid_bdev_name=Existed_Raid 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@226 -- # local strip_size 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@227 -- # local strip_size_create_arg 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@228 -- # local superblock_create_arg 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@230 
-- # '[' raid1 '!=' raid1 ']' 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@234 -- # strip_size=0 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@237 -- # '[' true = true ']' 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@238 -- # superblock_create_arg=-s 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@244 -- # raid_pid=66715 00:17:31.265 Process raid pid: 66715 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@245 -- # echo 'Process raid pid: 66715' 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@246 -- # waitforlisten 66715 /var/tmp/spdk-raid.sock 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@243 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -i 0 -L bdev_raid 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 66715 ']' 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:31.265 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:31.265 21:16:42 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:31.265 [2024-07-14 21:16:42.599256] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:31.265 [2024-07-14 21:16:42.599491] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:31.833 EAL: TSC is not safe to use in SMP mode 00:17:31.833 EAL: TSC is not invariant 00:17:31.833 [2024-07-14 21:16:43.118395] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:31.833 [2024-07-14 21:16:43.191245] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
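The state-function test traced below drives everything through SPDK's rpc.py against the dedicated raid socket. A condensed, illustrative sketch of the create/verify/teardown sequence it exercises follows; the rpc() shorthand is an assumption for brevity, while every command form is taken verbatim from the trace itself:

    # shorthand assumed for this sketch only; the trace spells the full path out each time
    rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }

    # 32 MiB malloc base bdevs: 4096-byte blocks plus 32 bytes of interleaved metadata (-m 32 -i)
    rpc bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1
    rpc bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2

    # raid1 volume with an on-disk superblock (-s)
    rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid

    # state checks parse the JSON the same way verify_raid_bdev_state does
    rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "Existed_Raid")'

    rpc bdev_raid_delete Existed_Raid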
00:17:31.833 [2024-07-14 21:16:43.193624] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:31.833 [2024-07-14 21:16:43.194508] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:31.833 [2024-07-14 21:16:43.194538] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@250 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:32.401 [2024-07-14 21:16:43.818979] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:32.401 [2024-07-14 21:16:43.819011] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:32.401 [2024-07-14 21:16:43.819015] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:32.401 [2024-07-14 21:16:43.819038] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@251 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:32.401 21:16:43 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:32.660 21:16:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:32.660 "name": "Existed_Raid", 00:17:32.660 "uuid": "5ec8e6b0-4226-11ef-aa83-81fbc7dfef58", 00:17:32.660 "strip_size_kb": 0, 00:17:32.660 "state": "configuring", 00:17:32.660 "raid_level": "raid1", 00:17:32.660 "superblock": true, 00:17:32.660 "num_base_bdevs": 2, 00:17:32.660 "num_base_bdevs_discovered": 0, 00:17:32.660 "num_base_bdevs_operational": 2, 00:17:32.660 
"base_bdevs_list": [ 00:17:32.660 { 00:17:32.660 "name": "BaseBdev1", 00:17:32.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.660 "is_configured": false, 00:17:32.660 "data_offset": 0, 00:17:32.660 "data_size": 0 00:17:32.660 }, 00:17:32.660 { 00:17:32.660 "name": "BaseBdev2", 00:17:32.660 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:32.660 "is_configured": false, 00:17:32.660 "data_offset": 0, 00:17:32.660 "data_size": 0 00:17:32.660 } 00:17:32.660 ] 00:17:32.660 }' 00:17:32.660 21:16:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:32.660 21:16:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:32.919 21:16:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@252 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:33.184 [2024-07-14 21:16:44.518974] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:33.184 [2024-07-14 21:16:44.518991] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15779e034500 name Existed_Raid, state configuring 00:17:33.184 21:16:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@256 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:33.184 [2024-07-14 21:16:44.714980] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev1 00:17:33.184 [2024-07-14 21:16:44.715008] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev1 doesn't exist now 00:17:33.184 [2024-07-14 21:16:44.715012] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:33.184 [2024-07-14 21:16:44.715034] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:33.476 21:16:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@257 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1 00:17:33.476 [2024-07-14 21:16:44.911805] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:33.476 BaseBdev1 00:17:33.476 21:16:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@258 -- # waitforbdev BaseBdev1 00:17:33.476 21:16:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev1 00:17:33.476 21:16:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:33.476 21:16:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:17:33.476 21:16:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:33.476 21:16:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:33.476 21:16:44 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:33.743 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 -t 
2000 00:17:34.006 [ 00:17:34.006 { 00:17:34.006 "name": "BaseBdev1", 00:17:34.006 "aliases": [ 00:17:34.006 "5f6f88bf-4226-11ef-aa83-81fbc7dfef58" 00:17:34.006 ], 00:17:34.006 "product_name": "Malloc disk", 00:17:34.006 "block_size": 4128, 00:17:34.006 "num_blocks": 8192, 00:17:34.006 "uuid": "5f6f88bf-4226-11ef-aa83-81fbc7dfef58", 00:17:34.006 "md_size": 32, 00:17:34.006 "md_interleave": true, 00:17:34.006 "dif_type": 0, 00:17:34.006 "assigned_rate_limits": { 00:17:34.006 "rw_ios_per_sec": 0, 00:17:34.006 "rw_mbytes_per_sec": 0, 00:17:34.006 "r_mbytes_per_sec": 0, 00:17:34.006 "w_mbytes_per_sec": 0 00:17:34.006 }, 00:17:34.006 "claimed": true, 00:17:34.006 "claim_type": "exclusive_write", 00:17:34.006 "zoned": false, 00:17:34.006 "supported_io_types": { 00:17:34.006 "read": true, 00:17:34.006 "write": true, 00:17:34.006 "unmap": true, 00:17:34.006 "flush": true, 00:17:34.006 "reset": true, 00:17:34.006 "nvme_admin": false, 00:17:34.006 "nvme_io": false, 00:17:34.006 "nvme_io_md": false, 00:17:34.006 "write_zeroes": true, 00:17:34.006 "zcopy": true, 00:17:34.006 "get_zone_info": false, 00:17:34.006 "zone_management": false, 00:17:34.006 "zone_append": false, 00:17:34.006 "compare": false, 00:17:34.006 "compare_and_write": false, 00:17:34.006 "abort": true, 00:17:34.006 "seek_hole": false, 00:17:34.006 "seek_data": false, 00:17:34.006 "copy": true, 00:17:34.006 "nvme_iov_md": false 00:17:34.006 }, 00:17:34.006 "memory_domains": [ 00:17:34.006 { 00:17:34.006 "dma_device_id": "system", 00:17:34.006 "dma_device_type": 1 00:17:34.006 }, 00:17:34.006 { 00:17:34.006 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:34.006 "dma_device_type": 2 00:17:34.006 } 00:17:34.006 ], 00:17:34.006 "driver_specific": {} 00:17:34.006 } 00:17:34.006 ] 00:17:34.006 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:17:34.006 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@259 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:34.006 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:34.006 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:34.006 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:34.006 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:34.006 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:34.006 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:34.006 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:34.006 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:34.006 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:34.006 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:34.006 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock 
bdev_raid_get_bdevs all 00:17:34.265 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:34.265 "name": "Existed_Raid", 00:17:34.265 "uuid": "5f519ec2-4226-11ef-aa83-81fbc7dfef58", 00:17:34.265 "strip_size_kb": 0, 00:17:34.265 "state": "configuring", 00:17:34.265 "raid_level": "raid1", 00:17:34.265 "superblock": true, 00:17:34.265 "num_base_bdevs": 2, 00:17:34.265 "num_base_bdevs_discovered": 1, 00:17:34.265 "num_base_bdevs_operational": 2, 00:17:34.265 "base_bdevs_list": [ 00:17:34.265 { 00:17:34.265 "name": "BaseBdev1", 00:17:34.265 "uuid": "5f6f88bf-4226-11ef-aa83-81fbc7dfef58", 00:17:34.265 "is_configured": true, 00:17:34.265 "data_offset": 256, 00:17:34.265 "data_size": 7936 00:17:34.265 }, 00:17:34.265 { 00:17:34.265 "name": "BaseBdev2", 00:17:34.265 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:34.265 "is_configured": false, 00:17:34.265 "data_offset": 0, 00:17:34.265 "data_size": 0 00:17:34.265 } 00:17:34.265 ] 00:17:34.265 }' 00:17:34.265 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:34.265 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:34.523 21:16:45 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@260 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete Existed_Raid 00:17:34.782 [2024-07-14 21:16:46.099020] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: Existed_Raid 00:17:34.782 [2024-07-14 21:16:46.099042] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15779e034500 name Existed_Raid, state configuring 00:17:34.782 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@264 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n Existed_Raid 00:17:35.040 [2024-07-14 21:16:46.347047] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:35.040 [2024-07-14 21:16:46.347877] bdev.c:8157:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: BaseBdev2 00:17:35.040 [2024-07-14 21:16:46.347926] bdev_raid_rpc.c: 311:rpc_bdev_raid_create: *DEBUG*: base bdev BaseBdev2 doesn't exist now 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i = 1 )) 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@266 -- # verify_raid_bdev_state Existed_Raid configuring raid1 0 2 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 
-- # local raid_bdev_info 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:35.040 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:35.299 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:35.299 "name": "Existed_Raid", 00:17:35.299 "uuid": "604aa73d-4226-11ef-aa83-81fbc7dfef58", 00:17:35.299 "strip_size_kb": 0, 00:17:35.299 "state": "configuring", 00:17:35.299 "raid_level": "raid1", 00:17:35.299 "superblock": true, 00:17:35.299 "num_base_bdevs": 2, 00:17:35.299 "num_base_bdevs_discovered": 1, 00:17:35.299 "num_base_bdevs_operational": 2, 00:17:35.299 "base_bdevs_list": [ 00:17:35.299 { 00:17:35.299 "name": "BaseBdev1", 00:17:35.299 "uuid": "5f6f88bf-4226-11ef-aa83-81fbc7dfef58", 00:17:35.299 "is_configured": true, 00:17:35.299 "data_offset": 256, 00:17:35.299 "data_size": 7936 00:17:35.299 }, 00:17:35.299 { 00:17:35.299 "name": "BaseBdev2", 00:17:35.299 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:35.299 "is_configured": false, 00:17:35.299 "data_offset": 0, 00:17:35.299 "data_size": 0 00:17:35.299 } 00:17:35.299 ] 00:17:35.299 }' 00:17:35.299 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:35.299 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:35.558 21:16:46 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@267 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2 00:17:35.816 [2024-07-14 21:16:47.159130] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:35.816 [2024-07-14 21:16:47.159175] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x15779e034a00 00:17:35.816 [2024-07-14 21:16:47.159181] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:35.816 [2024-07-14 21:16:47.159197] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x15779e097e20 00:17:35.816 [2024-07-14 21:16:47.159209] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x15779e034a00 00:17:35.816 [2024-07-14 21:16:47.159212] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name Existed_Raid, raid_bdev 0x15779e034a00 00:17:35.816 [2024-07-14 21:16:47.159222] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:35.816 BaseBdev2 00:17:35.816 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@268 -- # waitforbdev BaseBdev2 00:17:35.816 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@897 -- # local bdev_name=BaseBdev2 00:17:35.816 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:17:35.816 
21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@899 -- # local i 00:17:35.817 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:17:35.817 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:17:35.817 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@902 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_wait_for_examine 00:17:36.076 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@904 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 -t 2000 00:17:36.076 [ 00:17:36.076 { 00:17:36.076 "name": "BaseBdev2", 00:17:36.076 "aliases": [ 00:17:36.076 "60c68f21-4226-11ef-aa83-81fbc7dfef58" 00:17:36.076 ], 00:17:36.076 "product_name": "Malloc disk", 00:17:36.076 "block_size": 4128, 00:17:36.076 "num_blocks": 8192, 00:17:36.076 "uuid": "60c68f21-4226-11ef-aa83-81fbc7dfef58", 00:17:36.076 "md_size": 32, 00:17:36.076 "md_interleave": true, 00:17:36.076 "dif_type": 0, 00:17:36.076 "assigned_rate_limits": { 00:17:36.076 "rw_ios_per_sec": 0, 00:17:36.076 "rw_mbytes_per_sec": 0, 00:17:36.076 "r_mbytes_per_sec": 0, 00:17:36.076 "w_mbytes_per_sec": 0 00:17:36.076 }, 00:17:36.076 "claimed": true, 00:17:36.076 "claim_type": "exclusive_write", 00:17:36.076 "zoned": false, 00:17:36.076 "supported_io_types": { 00:17:36.076 "read": true, 00:17:36.076 "write": true, 00:17:36.076 "unmap": true, 00:17:36.076 "flush": true, 00:17:36.076 "reset": true, 00:17:36.076 "nvme_admin": false, 00:17:36.076 "nvme_io": false, 00:17:36.076 "nvme_io_md": false, 00:17:36.076 "write_zeroes": true, 00:17:36.076 "zcopy": true, 00:17:36.076 "get_zone_info": false, 00:17:36.076 "zone_management": false, 00:17:36.076 "zone_append": false, 00:17:36.076 "compare": false, 00:17:36.076 "compare_and_write": false, 00:17:36.076 "abort": true, 00:17:36.076 "seek_hole": false, 00:17:36.076 "seek_data": false, 00:17:36.076 "copy": true, 00:17:36.076 "nvme_iov_md": false 00:17:36.076 }, 00:17:36.076 "memory_domains": [ 00:17:36.076 { 00:17:36.076 "dma_device_id": "system", 00:17:36.076 "dma_device_type": 1 00:17:36.076 }, 00:17:36.076 { 00:17:36.076 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.076 "dma_device_type": 2 00:17:36.076 } 00:17:36.076 ], 00:17:36.076 "driver_specific": {} 00:17:36.076 } 00:17:36.076 ] 00:17:36.076 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@905 -- # return 0 00:17:36.076 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i++ )) 00:17:36.076 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@265 -- # (( i < num_base_bdevs )) 00:17:36.076 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@270 -- # verify_raid_bdev_state Existed_Raid online raid1 0 2 00:17:36.076 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:36.076 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:36.076 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:36.076 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:36.076 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:36.076 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:36.334 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:36.334 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:36.334 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:36.334 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:36.334 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:36.591 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:36.591 "name": "Existed_Raid", 00:17:36.591 "uuid": "604aa73d-4226-11ef-aa83-81fbc7dfef58", 00:17:36.591 "strip_size_kb": 0, 00:17:36.591 "state": "online", 00:17:36.591 "raid_level": "raid1", 00:17:36.591 "superblock": true, 00:17:36.591 "num_base_bdevs": 2, 00:17:36.591 "num_base_bdevs_discovered": 2, 00:17:36.591 "num_base_bdevs_operational": 2, 00:17:36.591 "base_bdevs_list": [ 00:17:36.591 { 00:17:36.591 "name": "BaseBdev1", 00:17:36.591 "uuid": "5f6f88bf-4226-11ef-aa83-81fbc7dfef58", 00:17:36.591 "is_configured": true, 00:17:36.591 "data_offset": 256, 00:17:36.591 "data_size": 7936 00:17:36.591 }, 00:17:36.591 { 00:17:36.591 "name": "BaseBdev2", 00:17:36.591 "uuid": "60c68f21-4226-11ef-aa83-81fbc7dfef58", 00:17:36.591 "is_configured": true, 00:17:36.591 "data_offset": 256, 00:17:36.591 "data_size": 7936 00:17:36.591 } 00:17:36.591 ] 00:17:36.591 }' 00:17:36.591 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:36.591 21:16:47 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:36.848 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@271 -- # verify_raid_bdev_properties Existed_Raid 00:17:36.849 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=Existed_Raid 00:17:36.849 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:36.849 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:36.849 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:36.849 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:17:36.849 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b Existed_Raid 00:17:36.849 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:36.849 [2024-07-14 21:16:48.327131] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:36.849 21:16:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:36.849 "name": "Existed_Raid", 00:17:36.849 "aliases": [ 00:17:36.849 "604aa73d-4226-11ef-aa83-81fbc7dfef58" 00:17:36.849 ], 00:17:36.849 "product_name": "Raid Volume", 00:17:36.849 "block_size": 4128, 00:17:36.849 "num_blocks": 7936, 00:17:36.849 "uuid": "604aa73d-4226-11ef-aa83-81fbc7dfef58", 00:17:36.849 "md_size": 32, 00:17:36.849 "md_interleave": true, 00:17:36.849 "dif_type": 0, 00:17:36.849 "assigned_rate_limits": { 00:17:36.849 "rw_ios_per_sec": 0, 00:17:36.849 "rw_mbytes_per_sec": 0, 00:17:36.849 "r_mbytes_per_sec": 0, 00:17:36.849 "w_mbytes_per_sec": 0 00:17:36.849 }, 00:17:36.849 "claimed": false, 00:17:36.849 "zoned": false, 00:17:36.849 "supported_io_types": { 00:17:36.849 "read": true, 00:17:36.849 "write": true, 00:17:36.849 "unmap": false, 00:17:36.849 "flush": false, 00:17:36.849 "reset": true, 00:17:36.849 "nvme_admin": false, 00:17:36.849 "nvme_io": false, 00:17:36.849 "nvme_io_md": false, 00:17:36.849 "write_zeroes": true, 00:17:36.849 "zcopy": false, 00:17:36.849 "get_zone_info": false, 00:17:36.849 "zone_management": false, 00:17:36.849 "zone_append": false, 00:17:36.849 "compare": false, 00:17:36.849 "compare_and_write": false, 00:17:36.849 "abort": false, 00:17:36.849 "seek_hole": false, 00:17:36.849 "seek_data": false, 00:17:36.849 "copy": false, 00:17:36.849 "nvme_iov_md": false 00:17:36.849 }, 00:17:36.849 "memory_domains": [ 00:17:36.849 { 00:17:36.849 "dma_device_id": "system", 00:17:36.849 "dma_device_type": 1 00:17:36.849 }, 00:17:36.849 { 00:17:36.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.849 "dma_device_type": 2 00:17:36.849 }, 00:17:36.849 { 00:17:36.849 "dma_device_id": "system", 00:17:36.849 "dma_device_type": 1 00:17:36.849 }, 00:17:36.849 { 00:17:36.849 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:36.849 "dma_device_type": 2 00:17:36.849 } 00:17:36.849 ], 00:17:36.849 "driver_specific": { 00:17:36.849 "raid": { 00:17:36.849 "uuid": "604aa73d-4226-11ef-aa83-81fbc7dfef58", 00:17:36.849 "strip_size_kb": 0, 00:17:36.849 "state": "online", 00:17:36.849 "raid_level": "raid1", 00:17:36.849 "superblock": true, 00:17:36.849 "num_base_bdevs": 2, 00:17:36.849 "num_base_bdevs_discovered": 2, 00:17:36.849 "num_base_bdevs_operational": 2, 00:17:36.849 "base_bdevs_list": [ 00:17:36.849 { 00:17:36.849 "name": "BaseBdev1", 00:17:36.849 "uuid": "5f6f88bf-4226-11ef-aa83-81fbc7dfef58", 00:17:36.849 "is_configured": true, 00:17:36.849 "data_offset": 256, 00:17:36.849 "data_size": 7936 00:17:36.849 }, 00:17:36.849 { 00:17:36.849 "name": "BaseBdev2", 00:17:36.849 "uuid": "60c68f21-4226-11ef-aa83-81fbc7dfef58", 00:17:36.849 "is_configured": true, 00:17:36.849 "data_offset": 256, 00:17:36.849 "data_size": 7936 00:17:36.849 } 00:17:36.849 ] 00:17:36.849 } 00:17:36.849 } 00:17:36.849 }' 00:17:36.849 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:36.849 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='BaseBdev1 00:17:36.849 BaseBdev2' 00:17:36.849 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:36.849 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev1 
00:17:36.849 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:37.106 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:37.106 "name": "BaseBdev1", 00:17:37.106 "aliases": [ 00:17:37.106 "5f6f88bf-4226-11ef-aa83-81fbc7dfef58" 00:17:37.106 ], 00:17:37.106 "product_name": "Malloc disk", 00:17:37.106 "block_size": 4128, 00:17:37.106 "num_blocks": 8192, 00:17:37.106 "uuid": "5f6f88bf-4226-11ef-aa83-81fbc7dfef58", 00:17:37.106 "md_size": 32, 00:17:37.106 "md_interleave": true, 00:17:37.106 "dif_type": 0, 00:17:37.106 "assigned_rate_limits": { 00:17:37.106 "rw_ios_per_sec": 0, 00:17:37.106 "rw_mbytes_per_sec": 0, 00:17:37.106 "r_mbytes_per_sec": 0, 00:17:37.106 "w_mbytes_per_sec": 0 00:17:37.106 }, 00:17:37.106 "claimed": true, 00:17:37.106 "claim_type": "exclusive_write", 00:17:37.106 "zoned": false, 00:17:37.106 "supported_io_types": { 00:17:37.106 "read": true, 00:17:37.106 "write": true, 00:17:37.106 "unmap": true, 00:17:37.106 "flush": true, 00:17:37.106 "reset": true, 00:17:37.106 "nvme_admin": false, 00:17:37.106 "nvme_io": false, 00:17:37.106 "nvme_io_md": false, 00:17:37.106 "write_zeroes": true, 00:17:37.106 "zcopy": true, 00:17:37.106 "get_zone_info": false, 00:17:37.106 "zone_management": false, 00:17:37.106 "zone_append": false, 00:17:37.106 "compare": false, 00:17:37.106 "compare_and_write": false, 00:17:37.106 "abort": true, 00:17:37.106 "seek_hole": false, 00:17:37.106 "seek_data": false, 00:17:37.106 "copy": true, 00:17:37.106 "nvme_iov_md": false 00:17:37.106 }, 00:17:37.106 "memory_domains": [ 00:17:37.106 { 00:17:37.106 "dma_device_id": "system", 00:17:37.106 "dma_device_type": 1 00:17:37.106 }, 00:17:37.106 { 00:17:37.106 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.106 "dma_device_type": 2 00:17:37.106 } 00:17:37.106 ], 00:17:37.106 "driver_specific": {} 00:17:37.106 }' 00:17:37.106 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.107 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.107 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:37.107 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.107 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.107 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:37.107 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.107 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.107 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:37.107 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.107 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.107 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:37.107 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:37.107 21:16:48 
bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b BaseBdev2 00:17:37.107 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:37.364 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:37.364 "name": "BaseBdev2", 00:17:37.364 "aliases": [ 00:17:37.364 "60c68f21-4226-11ef-aa83-81fbc7dfef58" 00:17:37.364 ], 00:17:37.364 "product_name": "Malloc disk", 00:17:37.364 "block_size": 4128, 00:17:37.364 "num_blocks": 8192, 00:17:37.364 "uuid": "60c68f21-4226-11ef-aa83-81fbc7dfef58", 00:17:37.364 "md_size": 32, 00:17:37.364 "md_interleave": true, 00:17:37.364 "dif_type": 0, 00:17:37.364 "assigned_rate_limits": { 00:17:37.364 "rw_ios_per_sec": 0, 00:17:37.364 "rw_mbytes_per_sec": 0, 00:17:37.364 "r_mbytes_per_sec": 0, 00:17:37.364 "w_mbytes_per_sec": 0 00:17:37.364 }, 00:17:37.364 "claimed": true, 00:17:37.364 "claim_type": "exclusive_write", 00:17:37.364 "zoned": false, 00:17:37.364 "supported_io_types": { 00:17:37.364 "read": true, 00:17:37.364 "write": true, 00:17:37.364 "unmap": true, 00:17:37.364 "flush": true, 00:17:37.364 "reset": true, 00:17:37.364 "nvme_admin": false, 00:17:37.364 "nvme_io": false, 00:17:37.364 "nvme_io_md": false, 00:17:37.364 "write_zeroes": true, 00:17:37.364 "zcopy": true, 00:17:37.364 "get_zone_info": false, 00:17:37.364 "zone_management": false, 00:17:37.364 "zone_append": false, 00:17:37.364 "compare": false, 00:17:37.364 "compare_and_write": false, 00:17:37.364 "abort": true, 00:17:37.364 "seek_hole": false, 00:17:37.364 "seek_data": false, 00:17:37.364 "copy": true, 00:17:37.364 "nvme_iov_md": false 00:17:37.364 }, 00:17:37.364 "memory_domains": [ 00:17:37.364 { 00:17:37.364 "dma_device_id": "system", 00:17:37.364 "dma_device_type": 1 00:17:37.364 }, 00:17:37.364 { 00:17:37.364 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:37.364 "dma_device_type": 2 00:17:37.364 } 00:17:37.364 ], 00:17:37.364 "driver_specific": {} 00:17:37.364 }' 00:17:37.364 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.364 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:37.364 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:37.364 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.622 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:37.622 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:37.622 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.622 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:37.622 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:37.622 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.623 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:37.623 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@208 -- 
# [[ 0 == 0 ]] 00:17:37.623 21:16:48 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@274 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev1 00:17:37.881 [2024-07-14 21:16:49.199160] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@275 -- # local expected_state 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@276 -- # has_redundancy raid1 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@279 -- # expected_state=online 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@281 -- # verify_raid_bdev_state Existed_Raid online raid1 0 1 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=Existed_Raid 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "Existed_Raid")' 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:37.881 "name": "Existed_Raid", 00:17:37.881 "uuid": "604aa73d-4226-11ef-aa83-81fbc7dfef58", 00:17:37.881 "strip_size_kb": 0, 00:17:37.881 "state": "online", 00:17:37.881 "raid_level": "raid1", 00:17:37.881 "superblock": true, 00:17:37.881 "num_base_bdevs": 2, 00:17:37.881 "num_base_bdevs_discovered": 1, 00:17:37.881 "num_base_bdevs_operational": 1, 00:17:37.881 "base_bdevs_list": [ 00:17:37.881 { 00:17:37.881 "name": null, 00:17:37.881 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:37.881 "is_configured": false, 00:17:37.881 "data_offset": 256, 00:17:37.881 "data_size": 7936 00:17:37.881 }, 00:17:37.881 { 00:17:37.881 "name": "BaseBdev2", 00:17:37.881 "uuid": "60c68f21-4226-11ef-aa83-81fbc7dfef58", 00:17:37.881 "is_configured": true, 00:17:37.881 "data_offset": 256, 00:17:37.881 "data_size": 
7936 00:17:37.881 } 00:17:37.881 ] 00:17:37.881 }' 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:37.881 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:38.447 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i = 1 )) 00:17:38.447 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:38.447 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.447 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # jq -r '.[0]["name"]' 00:17:38.447 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@286 -- # raid_bdev=Existed_Raid 00:17:38.447 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@287 -- # '[' Existed_Raid '!=' Existed_Raid ']' 00:17:38.447 21:16:49 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@291 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_delete BaseBdev2 00:17:38.706 [2024-07-14 21:16:50.128976] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev2 00:17:38.706 [2024-07-14 21:16:50.129026] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:38.706 [2024-07-14 21:16:50.134874] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:38.706 [2024-07-14 21:16:50.134888] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:38.706 [2024-07-14 21:16:50.134908] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x15779e034a00 name Existed_Raid, state offline 00:17:38.706 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i++ )) 00:17:38.706 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@285 -- # (( i < num_base_bdevs )) 00:17:38.706 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:38.706 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # jq -r '.[0]["name"] | select(.)' 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@293 -- # raid_bdev= 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@294 -- # '[' -n '' ']' 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@299 -- # '[' 2 -gt 2 ']' 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@341 -- # killprocess 66715 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 66715 ']' 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 66715 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@953 
-- # '[' FreeBSD = Linux ']' 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 66715 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:38.965 killing process with pid 66715 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66715' 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 66715 00:17:38.965 [2024-07-14 21:16:50.366307] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:38.965 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 66715 00:17:38.965 [2024-07-14 21:16:50.366340] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:39.223 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- bdev/bdev_raid.sh@343 -- # return 0 00:17:39.223 00:17:39.223 real 0m7.963s 00:17:39.223 user 0m13.763s 00:17:39.223 sys 0m1.392s 00:17:39.223 ************************************ 00:17:39.223 END TEST raid_state_function_test_sb_md_interleaved 00:17:39.223 ************************************ 00:17:39.223 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:39.223 21:16:50 bdev_raid.raid_state_function_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.223 21:16:50 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:39.223 21:16:50 bdev_raid -- bdev/bdev_raid.sh@913 -- # run_test raid_superblock_test_md_interleaved raid_superblock_test raid1 2 00:17:39.223 21:16:50 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:17:39.223 21:16:50 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:39.223 21:16:50 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:39.223 ************************************ 00:17:39.223 START TEST raid_superblock_test_md_interleaved 00:17:39.223 ************************************ 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1123 -- # raid_superblock_test raid1 2 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@392 -- # local raid_level=raid1 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@393 -- # local num_base_bdevs=2 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # base_bdevs_malloc=() 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@394 -- # local base_bdevs_malloc 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # base_bdevs_pt=() 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@395 -- # local base_bdevs_pt 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # base_bdevs_pt_uuid=() 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@396 -- # local base_bdevs_pt_uuid 
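Every test in this file runs against its own bdev_svc daemon. The teardown traced above is the generic killprocess helper: on FreeBSD it resolves the victim's name with "ps -c -o command" (the Linux branch uses a different ps invocation), refuses to proceed if the name resolves to sudo, then kills the pid and waits on it. The prologue of the next test, below, starts a fresh daemon and blocks in waitforlisten until the RPC socket answers. A minimal sketch of that lifecycle, using only paths and helpers visible in this log (waitforlisten and killprocess come from autotest_common.sh, which the suite sources):

    SOCK=/var/tmp/spdk-raid.sock
    /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r "$SOCK" -L bdev_raid &
    raid_pid=$!
    waitforlisten "$raid_pid" "$SOCK"   # polls until the socket accepts JSON-RPC
    # ... test body issues rpc.py calls against $SOCK ...
    killprocess "$raid_pid"             # kill, then wait, with the uname/ps guards traced above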
00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@397 -- # local raid_bdev_name=raid_bdev1 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@398 -- # local strip_size 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@399 -- # local strip_size_create_arg 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@400 -- # local raid_bdev_uuid 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@401 -- # local raid_bdev 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@403 -- # '[' raid1 '!=' raid1 ']' 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@407 -- # strip_size=0 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@411 -- # raid_pid=66981 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@412 -- # waitforlisten 66981 /var/tmp/spdk-raid.sock 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 66981 ']' 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@410 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-raid.sock -L bdev_raid 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:39.223 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:39.224 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:39.224 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:39.224 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:39.224 21:16:50 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:39.224 [2024-07-14 21:16:50.610087] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:39.224 [2024-07-14 21:16:50.610343] [ DPDK EAL parameters: bdev_svc --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:39.790 EAL: TSC is not safe to use in SMP mode 00:17:39.790 EAL: TSC is not invariant 00:17:39.790 [2024-07-14 21:16:51.152909] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:39.790 [2024-07-14 21:16:51.226941] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
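With the daemon listening, the loop below builds one malloc bdev plus a passthru wrapper per base device. The "-m 32 -i" pair is what makes this the md_interleaved variant: 32 bytes of metadata are carved out of each block instead of living in a separate buffer, so every bdev advertises 4128-byte blocks (4096 data + 32 md), 8192 of them for the 32 MiB volume, exactly the block_size/num_blocks values in the JSON dumps further down. A sketch of the two RPCs as they are traced below:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    # 32 MiB volume, 4096-byte blocks, 32 B of interleaved metadata per block
    $RPC bdev_malloc_create 32 4096 -m 32 -i -b malloc1
    # wrap it in a passthru bdev with a fixed UUID (the same pt1/pt2 names and
    # UUIDs recur throughout this log)
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001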
00:17:39.790 [2024-07-14 21:16:51.229303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:39.790 [2024-07-14 21:16:51.230193] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:39.790 [2024-07-14 21:16:51.230205] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:40.356 21:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:40.356 21:16:51 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:17:40.356 21:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i = 1 )) 00:17:40.356 21:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:40.356 21:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc1 00:17:40.356 21:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt1 00:17:40.356 21:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000001 00:17:40.356 21:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # base_bdevs_malloc+=($bdev_malloc) 00:17:40.356 21:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:40.356 21:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:40.356 21:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc1 00:17:40.356 malloc1 00:17:40.356 21:16:51 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:40.614 [2024-07-14 21:16:52.124979] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:40.614 [2024-07-14 21:16:52.125043] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:40.614 [2024-07-14 21:16:52.125070] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc4969234780 00:17:40.614 [2024-07-14 21:16:52.125077] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:40.614 [2024-07-14 21:16:52.125960] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:40.614 [2024-07-14 21:16:52.125983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:40.614 pt1 00:17:40.614 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:40.614 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:40.614 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@416 -- # local bdev_malloc=malloc2 00:17:40.614 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@417 -- # local bdev_pt=pt2 00:17:40.614 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@418 -- # local bdev_pt_uuid=00000000-0000-0000-0000-000000000002 00:17:40.614 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@420 -- # 
base_bdevs_malloc+=($bdev_malloc) 00:17:40.614 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@421 -- # base_bdevs_pt+=($bdev_pt) 00:17:40.614 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@422 -- # base_bdevs_pt_uuid+=($bdev_pt_uuid) 00:17:40.614 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@424 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b malloc2 00:17:40.872 malloc2 00:17:40.872 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@425 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:41.130 [2024-07-14 21:16:52.588976] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:41.130 [2024-07-14 21:16:52.589035] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:41.130 [2024-07-14 21:16:52.589062] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc4969234c80 00:17:41.130 [2024-07-14 21:16:52.589069] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:41.130 [2024-07-14 21:16:52.589672] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:41.130 [2024-07-14 21:16:52.589697] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:41.130 pt2 00:17:41.130 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i++ )) 00:17:41.130 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@415 -- # (( i <= num_base_bdevs )) 00:17:41.130 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@429 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'pt1 pt2' -n raid_bdev1 -s 00:17:41.389 [2024-07-14 21:16:52.856998] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:41.389 [2024-07-14 21:16:52.857653] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:41.389 [2024-07-14 21:16:52.857742] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc4969234f00 00:17:41.389 [2024-07-14 21:16:52.857748] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:41.389 [2024-07-14 21:16:52.857785] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc4969297e20 00:17:41.389 [2024-07-14 21:16:52.857815] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc4969234f00 00:17:41.389 [2024-07-14 21:16:52.857819] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc4969234f00 00:17:41.389 [2024-07-14 21:16:52.857832] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:41.389 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@430 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:41.389 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:41.389 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:41.389 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:41.389 21:16:52 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:41.389 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:41.389 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:41.389 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:41.389 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:41.389 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:41.389 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:41.389 21:16:52 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:41.648 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:41.648 "name": "raid_bdev1", 00:17:41.648 "uuid": "642bfe22-4226-11ef-aa83-81fbc7dfef58", 00:17:41.648 "strip_size_kb": 0, 00:17:41.648 "state": "online", 00:17:41.648 "raid_level": "raid1", 00:17:41.648 "superblock": true, 00:17:41.648 "num_base_bdevs": 2, 00:17:41.648 "num_base_bdevs_discovered": 2, 00:17:41.648 "num_base_bdevs_operational": 2, 00:17:41.648 "base_bdevs_list": [ 00:17:41.648 { 00:17:41.648 "name": "pt1", 00:17:41.648 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:41.648 "is_configured": true, 00:17:41.648 "data_offset": 256, 00:17:41.648 "data_size": 7936 00:17:41.648 }, 00:17:41.648 { 00:17:41.648 "name": "pt2", 00:17:41.648 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:41.648 "is_configured": true, 00:17:41.648 "data_offset": 256, 00:17:41.648 "data_size": 7936 00:17:41.648 } 00:17:41.648 ] 00:17:41.648 }' 00:17:41.648 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:41.648 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:41.918 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@431 -- # verify_raid_bdev_properties raid_bdev1 00:17:41.918 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:41.918 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:41.918 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:41.918 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:41.918 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:17:41.918 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:41.918 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:42.175 [2024-07-14 21:16:53.645031] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:42.175 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:42.175 "name": "raid_bdev1", 
00:17:42.175 "aliases": [ 00:17:42.176 "642bfe22-4226-11ef-aa83-81fbc7dfef58" 00:17:42.176 ], 00:17:42.176 "product_name": "Raid Volume", 00:17:42.176 "block_size": 4128, 00:17:42.176 "num_blocks": 7936, 00:17:42.176 "uuid": "642bfe22-4226-11ef-aa83-81fbc7dfef58", 00:17:42.176 "md_size": 32, 00:17:42.176 "md_interleave": true, 00:17:42.176 "dif_type": 0, 00:17:42.176 "assigned_rate_limits": { 00:17:42.176 "rw_ios_per_sec": 0, 00:17:42.176 "rw_mbytes_per_sec": 0, 00:17:42.176 "r_mbytes_per_sec": 0, 00:17:42.176 "w_mbytes_per_sec": 0 00:17:42.176 }, 00:17:42.176 "claimed": false, 00:17:42.176 "zoned": false, 00:17:42.176 "supported_io_types": { 00:17:42.176 "read": true, 00:17:42.176 "write": true, 00:17:42.176 "unmap": false, 00:17:42.176 "flush": false, 00:17:42.176 "reset": true, 00:17:42.176 "nvme_admin": false, 00:17:42.176 "nvme_io": false, 00:17:42.176 "nvme_io_md": false, 00:17:42.176 "write_zeroes": true, 00:17:42.176 "zcopy": false, 00:17:42.176 "get_zone_info": false, 00:17:42.176 "zone_management": false, 00:17:42.176 "zone_append": false, 00:17:42.176 "compare": false, 00:17:42.176 "compare_and_write": false, 00:17:42.176 "abort": false, 00:17:42.176 "seek_hole": false, 00:17:42.176 "seek_data": false, 00:17:42.176 "copy": false, 00:17:42.176 "nvme_iov_md": false 00:17:42.176 }, 00:17:42.176 "memory_domains": [ 00:17:42.176 { 00:17:42.176 "dma_device_id": "system", 00:17:42.176 "dma_device_type": 1 00:17:42.176 }, 00:17:42.176 { 00:17:42.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.176 "dma_device_type": 2 00:17:42.176 }, 00:17:42.176 { 00:17:42.176 "dma_device_id": "system", 00:17:42.176 "dma_device_type": 1 00:17:42.176 }, 00:17:42.176 { 00:17:42.176 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.176 "dma_device_type": 2 00:17:42.176 } 00:17:42.176 ], 00:17:42.176 "driver_specific": { 00:17:42.176 "raid": { 00:17:42.176 "uuid": "642bfe22-4226-11ef-aa83-81fbc7dfef58", 00:17:42.176 "strip_size_kb": 0, 00:17:42.176 "state": "online", 00:17:42.176 "raid_level": "raid1", 00:17:42.176 "superblock": true, 00:17:42.176 "num_base_bdevs": 2, 00:17:42.176 "num_base_bdevs_discovered": 2, 00:17:42.176 "num_base_bdevs_operational": 2, 00:17:42.176 "base_bdevs_list": [ 00:17:42.176 { 00:17:42.176 "name": "pt1", 00:17:42.176 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:42.176 "is_configured": true, 00:17:42.176 "data_offset": 256, 00:17:42.176 "data_size": 7936 00:17:42.176 }, 00:17:42.176 { 00:17:42.176 "name": "pt2", 00:17:42.176 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.176 "is_configured": true, 00:17:42.176 "data_offset": 256, 00:17:42.176 "data_size": 7936 00:17:42.176 } 00:17:42.176 ] 00:17:42.176 } 00:17:42.176 } 00:17:42.176 }' 00:17:42.176 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:42.176 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # base_bdev_names='pt1 00:17:42.176 pt2' 00:17:42.176 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.176 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:42.176 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.434 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- 
bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:42.434 "name": "pt1", 00:17:42.434 "aliases": [ 00:17:42.434 "00000000-0000-0000-0000-000000000001" 00:17:42.434 ], 00:17:42.434 "product_name": "passthru", 00:17:42.434 "block_size": 4128, 00:17:42.434 "num_blocks": 8192, 00:17:42.434 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:42.434 "md_size": 32, 00:17:42.434 "md_interleave": true, 00:17:42.434 "dif_type": 0, 00:17:42.434 "assigned_rate_limits": { 00:17:42.434 "rw_ios_per_sec": 0, 00:17:42.434 "rw_mbytes_per_sec": 0, 00:17:42.434 "r_mbytes_per_sec": 0, 00:17:42.434 "w_mbytes_per_sec": 0 00:17:42.434 }, 00:17:42.434 "claimed": true, 00:17:42.434 "claim_type": "exclusive_write", 00:17:42.434 "zoned": false, 00:17:42.434 "supported_io_types": { 00:17:42.434 "read": true, 00:17:42.434 "write": true, 00:17:42.434 "unmap": true, 00:17:42.434 "flush": true, 00:17:42.434 "reset": true, 00:17:42.434 "nvme_admin": false, 00:17:42.434 "nvme_io": false, 00:17:42.434 "nvme_io_md": false, 00:17:42.434 "write_zeroes": true, 00:17:42.434 "zcopy": true, 00:17:42.434 "get_zone_info": false, 00:17:42.434 "zone_management": false, 00:17:42.434 "zone_append": false, 00:17:42.434 "compare": false, 00:17:42.434 "compare_and_write": false, 00:17:42.434 "abort": true, 00:17:42.434 "seek_hole": false, 00:17:42.434 "seek_data": false, 00:17:42.434 "copy": true, 00:17:42.434 "nvme_iov_md": false 00:17:42.434 }, 00:17:42.434 "memory_domains": [ 00:17:42.434 { 00:17:42.434 "dma_device_id": "system", 00:17:42.434 "dma_device_type": 1 00:17:42.434 }, 00:17:42.434 { 00:17:42.434 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.434 "dma_device_type": 2 00:17:42.434 } 00:17:42.434 ], 00:17:42.434 "driver_specific": { 00:17:42.434 "passthru": { 00:17:42.434 "name": "pt1", 00:17:42.434 "base_bdev_name": "malloc1" 00:17:42.434 } 00:17:42.434 } 00:17:42.434 }' 00:17:42.434 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.434 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.434 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:42.434 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.434 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.434 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:42.434 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.434 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.691 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:42.691 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.691 21:16:53 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.691 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:42.691 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:42.691 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 
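verify_raid_bdev_properties repeats the same four field checks for the raid volume and then for each passthru leg, which is why every jq filter above appears twice: judging from the trace, one extraction comes from the raid bdev's JSON and one from the base bdev's, and the pair is compared with a bash [[ ... == ... ]] test (under the suite's errexit a mismatch aborts the test). A minimal sketch of one round, with the helper's internals inferred from the trace rather than quoted:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    raid_info=$($RPC bdev_get_bdevs -b raid_bdev1 | jq '.[]')
    base_info=$($RPC bdev_get_bdevs -b pt1 | jq '.[]')
    for field in .block_size .md_size .md_interleave .dif_type; do
        # each pair must match: 4128 / 32 / true / 0 in this run
        [[ $(jq "$field" <<< "$raid_info") == $(jq "$field" <<< "$base_info") ]]
    done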
00:17:42.691 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:42.691 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:42.691 "name": "pt2", 00:17:42.691 "aliases": [ 00:17:42.691 "00000000-0000-0000-0000-000000000002" 00:17:42.691 ], 00:17:42.691 "product_name": "passthru", 00:17:42.691 "block_size": 4128, 00:17:42.691 "num_blocks": 8192, 00:17:42.691 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:42.691 "md_size": 32, 00:17:42.691 "md_interleave": true, 00:17:42.691 "dif_type": 0, 00:17:42.691 "assigned_rate_limits": { 00:17:42.691 "rw_ios_per_sec": 0, 00:17:42.691 "rw_mbytes_per_sec": 0, 00:17:42.691 "r_mbytes_per_sec": 0, 00:17:42.691 "w_mbytes_per_sec": 0 00:17:42.691 }, 00:17:42.691 "claimed": true, 00:17:42.691 "claim_type": "exclusive_write", 00:17:42.691 "zoned": false, 00:17:42.691 "supported_io_types": { 00:17:42.691 "read": true, 00:17:42.691 "write": true, 00:17:42.692 "unmap": true, 00:17:42.692 "flush": true, 00:17:42.692 "reset": true, 00:17:42.692 "nvme_admin": false, 00:17:42.692 "nvme_io": false, 00:17:42.692 "nvme_io_md": false, 00:17:42.692 "write_zeroes": true, 00:17:42.692 "zcopy": true, 00:17:42.692 "get_zone_info": false, 00:17:42.692 "zone_management": false, 00:17:42.692 "zone_append": false, 00:17:42.692 "compare": false, 00:17:42.692 "compare_and_write": false, 00:17:42.692 "abort": true, 00:17:42.692 "seek_hole": false, 00:17:42.692 "seek_data": false, 00:17:42.692 "copy": true, 00:17:42.692 "nvme_iov_md": false 00:17:42.692 }, 00:17:42.692 "memory_domains": [ 00:17:42.692 { 00:17:42.692 "dma_device_id": "system", 00:17:42.692 "dma_device_type": 1 00:17:42.692 }, 00:17:42.692 { 00:17:42.692 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:42.692 "dma_device_type": 2 00:17:42.692 } 00:17:42.692 ], 00:17:42.692 "driver_specific": { 00:17:42.692 "passthru": { 00:17:42.692 "name": "pt2", 00:17:42.692 "base_bdev_name": "malloc2" 00:17:42.692 } 00:17:42.692 } 00:17:42.692 }' 00:17:42.692 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.692 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:42.692 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:42.692 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.692 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:42.692 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:42.692 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.949 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:42.949 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:42.949 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.949 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:42.949 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:42.949 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s 
/var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:42.949 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # jq -r '.[] | .uuid' 00:17:43.207 [2024-07-14 21:16:54.533073] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:43.207 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@434 -- # raid_bdev_uuid=642bfe22-4226-11ef-aa83-81fbc7dfef58 00:17:43.207 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@435 -- # '[' -z 642bfe22-4226-11ef-aa83-81fbc7dfef58 ']' 00:17:43.207 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@440 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:43.463 [2024-07-14 21:16:54.785105] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:43.463 [2024-07-14 21:16:54.785119] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:43.463 [2024-07-14 21:16:54.785159] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:43.463 [2024-07-14 21:16:54.785173] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:43.463 [2024-07-14 21:16:54.785177] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc4969234f00 name raid_bdev1, state offline 00:17:43.463 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:43.463 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # jq -r '.[]' 00:17:43.463 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@441 -- # raid_bdev= 00:17:43.463 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@442 -- # '[' -n '' ']' 00:17:43.463 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:43.463 21:16:54 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:43.721 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@447 -- # for i in "${base_bdevs_pt[@]}" 00:17:43.721 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@448 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:43.979 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # jq -r '[.[] | select(.product_name == "passthru")] | any' 00:17:43.979 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs 00:17:44.237 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@450 -- # '[' false == true ']' 00:17:44.237 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@456 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:44.237 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:17:44.237 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@650 -- # 
valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:44.237 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.237 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:44.237 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.237 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:44.237 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.237 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:17:44.237 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:44.237 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:17:44.237 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1 00:17:44.493 [2024-07-14 21:16:55.881125] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc1 is claimed 00:17:44.493 [2024-07-14 21:16:55.881748] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev malloc2 is claimed 00:17:44.493 [2024-07-14 21:16:55.881774] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc1 00:17:44.493 [2024-07-14 21:16:55.881819] bdev_raid.c:3106:raid_bdev_configure_base_bdev_check_sb_cb: *ERROR*: Superblock of a different raid bdev found on bdev malloc2 00:17:44.493 [2024-07-14 21:16:55.881844] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:44.493 [2024-07-14 21:16:55.881848] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc4969234c80 name raid_bdev1, state configuring 00:17:44.493 request: 00:17:44.493 { 00:17:44.493 "name": "raid_bdev1", 00:17:44.493 "raid_level": "raid1", 00:17:44.493 "base_bdevs": [ 00:17:44.493 "malloc1", 00:17:44.493 "malloc2" 00:17:44.493 ], 00:17:44.493 "superblock": false, 00:17:44.493 "method": "bdev_raid_create", 00:17:44.493 "req_id": 1 00:17:44.493 } 00:17:44.493 Got JSON-RPC error response 00:17:44.493 response: 00:17:44.493 { 00:17:44.493 "code": -17, 00:17:44.493 "message": "Failed to create RAID bdev raid_bdev1: File exists" 00:17:44.493 } 00:17:44.493 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:17:44.493 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:17:44.493 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:17:44.493 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:17:44.493 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:44.493 21:16:55 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # jq -r '.[]' 00:17:44.751 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@458 -- # raid_bdev= 00:17:44.751 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@459 -- # '[' -n '' ']' 00:17:44.751 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@464 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:45.009 [2024-07-14 21:16:56.329150] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:45.009 [2024-07-14 21:16:56.329225] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.009 [2024-07-14 21:16:56.329251] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc4969234780 00:17:45.009 [2024-07-14 21:16:56.329258] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.009 [2024-07-14 21:16:56.329942] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.009 [2024-07-14 21:16:56.329979] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:45.009 [2024-07-14 21:16:56.330013] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:45.009 [2024-07-14 21:16:56.330025] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:45.009 pt1 00:17:45.009 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@467 -- # verify_raid_bdev_state raid_bdev1 configuring raid1 0 2 00:17:45.009 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:45.009 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=configuring 00:17:45.009 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:45.009 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:45.009 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:45.009 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:45.009 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:45.009 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:45.009 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:45.009 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.009 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:45.266 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:45.266 "name": "raid_bdev1", 00:17:45.266 "uuid": "642bfe22-4226-11ef-aa83-81fbc7dfef58", 00:17:45.266 "strip_size_kb": 0, 00:17:45.266 "state": "configuring", 00:17:45.266 
"raid_level": "raid1", 00:17:45.266 "superblock": true, 00:17:45.266 "num_base_bdevs": 2, 00:17:45.266 "num_base_bdevs_discovered": 1, 00:17:45.266 "num_base_bdevs_operational": 2, 00:17:45.266 "base_bdevs_list": [ 00:17:45.266 { 00:17:45.266 "name": "pt1", 00:17:45.266 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:45.266 "is_configured": true, 00:17:45.266 "data_offset": 256, 00:17:45.266 "data_size": 7936 00:17:45.266 }, 00:17:45.266 { 00:17:45.266 "name": null, 00:17:45.266 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:45.266 "is_configured": false, 00:17:45.266 "data_offset": 256, 00:17:45.266 "data_size": 7936 00:17:45.266 } 00:17:45.266 ] 00:17:45.266 }' 00:17:45.266 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:45.266 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:45.522 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@469 -- # '[' 2 -gt 2 ']' 00:17:45.522 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i = 1 )) 00:17:45.522 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:45.522 21:16:56 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@478 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002 00:17:45.781 [2024-07-14 21:16:57.145291] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:45.781 [2024-07-14 21:16:57.145350] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:45.781 [2024-07-14 21:16:57.145362] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc4969234f00 00:17:45.781 [2024-07-14 21:16:57.145369] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:45.781 [2024-07-14 21:16:57.145430] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:45.781 [2024-07-14 21:16:57.145440] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:45.781 [2024-07-14 21:16:57.145473] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:45.781 [2024-07-14 21:16:57.145481] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:45.781 [2024-07-14 21:16:57.145535] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc4969235180 00:17:45.781 [2024-07-14 21:16:57.145553] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:45.781 [2024-07-14 21:16:57.145570] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc4969297e20 00:17:45.781 [2024-07-14 21:16:57.145582] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc4969235180 00:17:45.781 [2024-07-14 21:16:57.145586] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc4969235180 00:17:45.781 [2024-07-14 21:16:57.145598] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:45.781 pt2 00:17:45.781 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i++ )) 00:17:45.781 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@477 -- # (( i < num_base_bdevs )) 00:17:45.781 21:16:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@482 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:45.781 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:45.781 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:45.781 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:45.781 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:45.781 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:45.781 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:45.781 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:45.781 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:45.781 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:45.781 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:45.781 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:46.038 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:46.038 "name": "raid_bdev1", 00:17:46.038 "uuid": "642bfe22-4226-11ef-aa83-81fbc7dfef58", 00:17:46.038 "strip_size_kb": 0, 00:17:46.038 "state": "online", 00:17:46.038 "raid_level": "raid1", 00:17:46.038 "superblock": true, 00:17:46.038 "num_base_bdevs": 2, 00:17:46.038 "num_base_bdevs_discovered": 2, 00:17:46.038 "num_base_bdevs_operational": 2, 00:17:46.038 "base_bdevs_list": [ 00:17:46.038 { 00:17:46.038 "name": "pt1", 00:17:46.038 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:46.038 "is_configured": true, 00:17:46.038 "data_offset": 256, 00:17:46.038 "data_size": 7936 00:17:46.038 }, 00:17:46.038 { 00:17:46.038 "name": "pt2", 00:17:46.038 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:46.038 "is_configured": true, 00:17:46.038 "data_offset": 256, 00:17:46.038 "data_size": 7936 00:17:46.038 } 00:17:46.038 ] 00:17:46.038 }' 00:17:46.038 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:46.038 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:46.320 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@483 -- # verify_raid_bdev_properties raid_bdev1 00:17:46.320 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@194 -- # local raid_bdev_name=raid_bdev1 00:17:46.320 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@195 -- # local raid_bdev_info 00:17:46.320 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@196 -- # local base_bdev_info 00:17:46.320 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@197 -- # local base_bdev_names 00:17:46.320 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@198 -- # local name 00:17:46.320 21:16:57 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:46.320 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # jq '.[]' 00:17:46.604 [2024-07-14 21:16:57.941511] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:46.604 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@200 -- # raid_bdev_info='{ 00:17:46.604 "name": "raid_bdev1", 00:17:46.604 "aliases": [ 00:17:46.604 "642bfe22-4226-11ef-aa83-81fbc7dfef58" 00:17:46.604 ], 00:17:46.604 "product_name": "Raid Volume", 00:17:46.604 "block_size": 4128, 00:17:46.604 "num_blocks": 7936, 00:17:46.604 "uuid": "642bfe22-4226-11ef-aa83-81fbc7dfef58", 00:17:46.604 "md_size": 32, 00:17:46.604 "md_interleave": true, 00:17:46.604 "dif_type": 0, 00:17:46.604 "assigned_rate_limits": { 00:17:46.604 "rw_ios_per_sec": 0, 00:17:46.604 "rw_mbytes_per_sec": 0, 00:17:46.604 "r_mbytes_per_sec": 0, 00:17:46.604 "w_mbytes_per_sec": 0 00:17:46.604 }, 00:17:46.604 "claimed": false, 00:17:46.604 "zoned": false, 00:17:46.604 "supported_io_types": { 00:17:46.604 "read": true, 00:17:46.604 "write": true, 00:17:46.604 "unmap": false, 00:17:46.604 "flush": false, 00:17:46.604 "reset": true, 00:17:46.604 "nvme_admin": false, 00:17:46.604 "nvme_io": false, 00:17:46.604 "nvme_io_md": false, 00:17:46.604 "write_zeroes": true, 00:17:46.604 "zcopy": false, 00:17:46.604 "get_zone_info": false, 00:17:46.604 "zone_management": false, 00:17:46.604 "zone_append": false, 00:17:46.604 "compare": false, 00:17:46.604 "compare_and_write": false, 00:17:46.604 "abort": false, 00:17:46.604 "seek_hole": false, 00:17:46.604 "seek_data": false, 00:17:46.604 "copy": false, 00:17:46.604 "nvme_iov_md": false 00:17:46.604 }, 00:17:46.604 "memory_domains": [ 00:17:46.604 { 00:17:46.604 "dma_device_id": "system", 00:17:46.604 "dma_device_type": 1 00:17:46.604 }, 00:17:46.604 { 00:17:46.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.604 "dma_device_type": 2 00:17:46.604 }, 00:17:46.604 { 00:17:46.604 "dma_device_id": "system", 00:17:46.604 "dma_device_type": 1 00:17:46.604 }, 00:17:46.604 { 00:17:46.604 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.604 "dma_device_type": 2 00:17:46.604 } 00:17:46.604 ], 00:17:46.604 "driver_specific": { 00:17:46.604 "raid": { 00:17:46.604 "uuid": "642bfe22-4226-11ef-aa83-81fbc7dfef58", 00:17:46.604 "strip_size_kb": 0, 00:17:46.604 "state": "online", 00:17:46.604 "raid_level": "raid1", 00:17:46.604 "superblock": true, 00:17:46.604 "num_base_bdevs": 2, 00:17:46.604 "num_base_bdevs_discovered": 2, 00:17:46.604 "num_base_bdevs_operational": 2, 00:17:46.604 "base_bdevs_list": [ 00:17:46.604 { 00:17:46.604 "name": "pt1", 00:17:46.604 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:46.604 "is_configured": true, 00:17:46.604 "data_offset": 256, 00:17:46.604 "data_size": 7936 00:17:46.604 }, 00:17:46.604 { 00:17:46.604 "name": "pt2", 00:17:46.604 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:46.604 "is_configured": true, 00:17:46.604 "data_offset": 256, 00:17:46.604 "data_size": 7936 00:17:46.604 } 00:17:46.604 ] 00:17:46.604 } 00:17:46.604 } 00:17:46.604 }' 00:17:46.604 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- # jq -r '.driver_specific.raid.base_bdevs_list[] | select(.is_configured == true).name' 00:17:46.604 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@201 -- 
# base_bdev_names='pt1 00:17:46.604 pt2' 00:17:46.604 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:46.604 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt1 00:17:46.604 21:16:57 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:46.862 "name": "pt1", 00:17:46.862 "aliases": [ 00:17:46.862 "00000000-0000-0000-0000-000000000001" 00:17:46.862 ], 00:17:46.862 "product_name": "passthru", 00:17:46.862 "block_size": 4128, 00:17:46.862 "num_blocks": 8192, 00:17:46.862 "uuid": "00000000-0000-0000-0000-000000000001", 00:17:46.862 "md_size": 32, 00:17:46.862 "md_interleave": true, 00:17:46.862 "dif_type": 0, 00:17:46.862 "assigned_rate_limits": { 00:17:46.862 "rw_ios_per_sec": 0, 00:17:46.862 "rw_mbytes_per_sec": 0, 00:17:46.862 "r_mbytes_per_sec": 0, 00:17:46.862 "w_mbytes_per_sec": 0 00:17:46.862 }, 00:17:46.862 "claimed": true, 00:17:46.862 "claim_type": "exclusive_write", 00:17:46.862 "zoned": false, 00:17:46.862 "supported_io_types": { 00:17:46.862 "read": true, 00:17:46.862 "write": true, 00:17:46.862 "unmap": true, 00:17:46.862 "flush": true, 00:17:46.862 "reset": true, 00:17:46.862 "nvme_admin": false, 00:17:46.862 "nvme_io": false, 00:17:46.862 "nvme_io_md": false, 00:17:46.862 "write_zeroes": true, 00:17:46.862 "zcopy": true, 00:17:46.862 "get_zone_info": false, 00:17:46.862 "zone_management": false, 00:17:46.862 "zone_append": false, 00:17:46.862 "compare": false, 00:17:46.862 "compare_and_write": false, 00:17:46.862 "abort": true, 00:17:46.862 "seek_hole": false, 00:17:46.862 "seek_data": false, 00:17:46.862 "copy": true, 00:17:46.862 "nvme_iov_md": false 00:17:46.862 }, 00:17:46.862 "memory_domains": [ 00:17:46.862 { 00:17:46.862 "dma_device_id": "system", 00:17:46.862 "dma_device_type": 1 00:17:46.862 }, 00:17:46.862 { 00:17:46.862 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:46.862 "dma_device_type": 2 00:17:46.862 } 00:17:46.862 ], 00:17:46.862 "driver_specific": { 00:17:46.862 "passthru": { 00:17:46.862 "name": "pt1", 00:17:46.862 "base_bdev_name": "malloc1" 00:17:46.862 } 00:17:46.862 } 00:17:46.862 }' 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 
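The stretch traced above is the superblock round trip, the point of this test: both passthru bdevs were deleted, a direct bdev_raid_create on the raw malloc disks was asserted (via the NOT helper) to fail with -17 "File exists" because their on-disk superblocks already name raid_bdev1, and re-creating pt1 and then pt2 let the examine path reassemble the array on its own, moving it from configuring (1 of 2 base bdevs discovered) back to online. The property checks running here confirm the reassembled volume kept its metadata layout; the decisive assertion follows just below, where the old and new UUIDs must be equal. A sketch of the round trip using the commands from this log:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    uuid_before=$($RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')
    $RPC bdev_passthru_delete pt1
    $RPC bdev_passthru_delete pt2
    # must fail (-17, File exists): the malloc disks still carry raid_bdev1's superblock
    NOT $RPC bdev_raid_create -r raid1 -b 'malloc1 malloc2' -n raid_bdev1
    $RPC bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001  # examine -> configuring, 1/2
    $RPC bdev_passthru_create -b malloc2 -p pt2 -u 00000000-0000-0000-0000-000000000002  # examine -> online, 2/2
    [ "$($RPC bdev_get_bdevs -b raid_bdev1 | jq -r '.[] | .uuid')" = "$uuid_before" ]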
00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@203 -- # for name in $base_bdev_names 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b pt2 00:17:46.862 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # jq '.[]' 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@204 -- # base_bdev_info='{ 00:17:47.119 "name": "pt2", 00:17:47.119 "aliases": [ 00:17:47.119 "00000000-0000-0000-0000-000000000002" 00:17:47.119 ], 00:17:47.119 "product_name": "passthru", 00:17:47.119 "block_size": 4128, 00:17:47.119 "num_blocks": 8192, 00:17:47.119 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.119 "md_size": 32, 00:17:47.119 "md_interleave": true, 00:17:47.119 "dif_type": 0, 00:17:47.119 "assigned_rate_limits": { 00:17:47.119 "rw_ios_per_sec": 0, 00:17:47.119 "rw_mbytes_per_sec": 0, 00:17:47.119 "r_mbytes_per_sec": 0, 00:17:47.119 "w_mbytes_per_sec": 0 00:17:47.119 }, 00:17:47.119 "claimed": true, 00:17:47.119 "claim_type": "exclusive_write", 00:17:47.119 "zoned": false, 00:17:47.119 "supported_io_types": { 00:17:47.119 "read": true, 00:17:47.119 "write": true, 00:17:47.119 "unmap": true, 00:17:47.119 "flush": true, 00:17:47.119 "reset": true, 00:17:47.119 "nvme_admin": false, 00:17:47.119 "nvme_io": false, 00:17:47.119 "nvme_io_md": false, 00:17:47.119 "write_zeroes": true, 00:17:47.119 "zcopy": true, 00:17:47.119 "get_zone_info": false, 00:17:47.119 "zone_management": false, 00:17:47.119 "zone_append": false, 00:17:47.119 "compare": false, 00:17:47.119 "compare_and_write": false, 00:17:47.119 "abort": true, 00:17:47.119 "seek_hole": false, 00:17:47.119 "seek_data": false, 00:17:47.119 "copy": true, 00:17:47.119 "nvme_iov_md": false 00:17:47.119 }, 00:17:47.119 "memory_domains": [ 00:17:47.119 { 00:17:47.119 "dma_device_id": "system", 00:17:47.119 "dma_device_type": 1 00:17:47.119 }, 00:17:47.119 { 00:17:47.119 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:17:47.119 "dma_device_type": 2 00:17:47.119 } 00:17:47.119 ], 00:17:47.119 "driver_specific": { 00:17:47.119 "passthru": { 00:17:47.119 "name": "pt2", 00:17:47.119 "base_bdev_name": "malloc2" 00:17:47.119 } 00:17:47.119 } 00:17:47.119 }' 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # jq .block_size 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@205 -- # [[ 4128 == 4128 ]] 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # jq .md_size 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@206 -- # [[ 32 == 32 ]] 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # jq .md_interleave 00:17:47.119 21:16:58 
bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@207 -- # [[ true == true ]] 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # jq .dif_type 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@208 -- # [[ 0 == 0 ]] 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # jq -r '.[] | .uuid' 00:17:47.119 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:47.376 [2024-07-14 21:16:58.865669] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:47.376 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@486 -- # '[' 642bfe22-4226-11ef-aa83-81fbc7dfef58 '!=' 642bfe22-4226-11ef-aa83-81fbc7dfef58 ']' 00:17:47.376 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@490 -- # has_redundancy raid1 00:17:47.376 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@213 -- # case $1 in 00:17:47.376 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@214 -- # return 0 00:17:47.376 21:16:58 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@492 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt1 00:17:47.941 [2024-07-14 21:16:59.181682] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: pt1 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@495 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:47.941 "name": "raid_bdev1", 00:17:47.941 "uuid": "642bfe22-4226-11ef-aa83-81fbc7dfef58", 00:17:47.941 "strip_size_kb": 0, 00:17:47.941 "state": "online", 
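What follows below is the degraded-mode check: has_redundancy returns success for raid1, so the test is allowed to drop one leg of the two-disk array. Deleting pt1 triggers the _raid_bdev_remove_base_bdev debug line, and the array has to stay "online" with a single base bdev discovered and operational, the vacated slot listed with a null name, rather than dropping to offline. A short sketch of the assertion:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock"
    $RPC bdev_passthru_delete pt1   # hot-remove one leg of the raid1 pair
    tmp=$($RPC bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")')
    [ "$(jq -r .state <<< "$tmp")" = "online" ]                    # degraded but still serving
    [ "$(jq -r .num_base_bdevs_discovered <<< "$tmp")" = "1" ]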
00:17:47.941 "raid_level": "raid1", 00:17:47.941 "superblock": true, 00:17:47.941 "num_base_bdevs": 2, 00:17:47.941 "num_base_bdevs_discovered": 1, 00:17:47.941 "num_base_bdevs_operational": 1, 00:17:47.941 "base_bdevs_list": [ 00:17:47.941 { 00:17:47.941 "name": null, 00:17:47.941 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:47.941 "is_configured": false, 00:17:47.941 "data_offset": 256, 00:17:47.941 "data_size": 7936 00:17:47.941 }, 00:17:47.941 { 00:17:47.941 "name": "pt2", 00:17:47.941 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:47.941 "is_configured": true, 00:17:47.941 "data_offset": 256, 00:17:47.941 "data_size": 7936 00:17:47.941 } 00:17:47.941 ] 00:17:47.941 }' 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:47.941 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:48.198 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@498 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:48.456 [2024-07-14 21:16:59.881716] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:48.456 [2024-07-14 21:16:59.881732] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:48.456 [2024-07-14 21:16:59.881770] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:48.456 [2024-07-14 21:16:59.881781] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:48.456 [2024-07-14 21:16:59.881785] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc4969235180 name raid_bdev1, state offline 00:17:48.456 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # jq -r '.[]' 00:17:48.456 21:16:59 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:48.713 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@499 -- # raid_bdev= 00:17:48.713 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@500 -- # '[' -n '' ']' 00:17:48.713 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i = 1 )) 00:17:48.713 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:48.713 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@506 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete pt2 00:17:48.971 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i++ )) 00:17:48.971 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@505 -- # (( i < num_base_bdevs )) 00:17:48.971 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i = 1 )) 00:17:48.971 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@510 -- # (( i < num_base_bdevs - 1 )) 00:17:48.971 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@518 -- # i=1 00:17:48.971 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@519 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc2 -p pt2 -u 
00000000-0000-0000-0000-000000000002 00:17:49.230 [2024-07-14 21:17:00.593774] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc2 00:17:49.230 [2024-07-14 21:17:00.593833] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:49.230 [2024-07-14 21:17:00.593861] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc4969234f00 00:17:49.230 [2024-07-14 21:17:00.593868] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:49.230 [2024-07-14 21:17:00.594562] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:49.230 [2024-07-14 21:17:00.594600] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt2 00:17:49.230 [2024-07-14 21:17:00.594636] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt2 00:17:49.230 [2024-07-14 21:17:00.594648] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:49.230 [2024-07-14 21:17:00.594681] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc4969235180 00:17:49.230 [2024-07-14 21:17:00.594685] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:49.230 [2024-07-14 21:17:00.594720] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc4969297e20 00:17:49.230 [2024-07-14 21:17:00.594733] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc4969235180 00:17:49.230 [2024-07-14 21:17:00.594737] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc4969235180 00:17:49.230 [2024-07-14 21:17:00.594749] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:49.230 pt2 00:17:49.230 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@522 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:49.230 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:49.230 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:49.230 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:49.230 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:49.230 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:49.230 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:49.230 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:49.230 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:49.230 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:49.230 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:49.230 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:49.488 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:49.488 "name": "raid_bdev1", 00:17:49.488 "uuid": 
"642bfe22-4226-11ef-aa83-81fbc7dfef58", 00:17:49.488 "strip_size_kb": 0, 00:17:49.488 "state": "online", 00:17:49.488 "raid_level": "raid1", 00:17:49.488 "superblock": true, 00:17:49.488 "num_base_bdevs": 2, 00:17:49.488 "num_base_bdevs_discovered": 1, 00:17:49.488 "num_base_bdevs_operational": 1, 00:17:49.488 "base_bdevs_list": [ 00:17:49.488 { 00:17:49.488 "name": null, 00:17:49.488 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:49.488 "is_configured": false, 00:17:49.488 "data_offset": 256, 00:17:49.488 "data_size": 7936 00:17:49.488 }, 00:17:49.488 { 00:17:49.488 "name": "pt2", 00:17:49.488 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:49.488 "is_configured": true, 00:17:49.488 "data_offset": 256, 00:17:49.488 "data_size": 7936 00:17:49.488 } 00:17:49.488 ] 00:17:49.488 }' 00:17:49.488 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:49.488 21:17:00 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:49.746 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@525 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:17:50.004 [2024-07-14 21:17:01.377809] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.004 [2024-07-14 21:17:01.377825] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:17:50.004 [2024-07-14 21:17:01.377845] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:50.004 [2024-07-14 21:17:01.377856] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:50.004 [2024-07-14 21:17:01.377859] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc4969235180 name raid_bdev1, state offline 00:17:50.004 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.004 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # jq -r '.[]' 00:17:50.263 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@526 -- # raid_bdev= 00:17:50.263 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@527 -- # '[' -n '' ']' 00:17:50.263 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@531 -- # '[' 2 -gt 2 ']' 00:17:50.263 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@539 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b malloc1 -p pt1 -u 00000000-0000-0000-0000-000000000001 00:17:50.522 [2024-07-14 21:17:01.917846] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on malloc1 00:17:50.522 [2024-07-14 21:17:01.917917] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:50.522 [2024-07-14 21:17:01.917928] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xc4969234c80 00:17:50.522 [2024-07-14 21:17:01.917935] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:50.522 [2024-07-14 21:17:01.918663] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:50.522 [2024-07-14 21:17:01.918699] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: pt1 00:17:50.522 [2024-07-14 21:17:01.918733] 
bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev pt1 00:17:50.522 [2024-07-14 21:17:01.918777] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt1 is claimed 00:17:50.522 [2024-07-14 21:17:01.918797] bdev_raid.c:3549:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev pt2 (4) greater than existing raid bdev raid_bdev1 (2) 00:17:50.522 [2024-07-14 21:17:01.918801] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:17:50.522 [2024-07-14 21:17:01.918824] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc4969234780 name raid_bdev1, state configuring 00:17:50.522 [2024-07-14 21:17:01.918849] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev pt2 is claimed 00:17:50.522 [2024-07-14 21:17:01.918865] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0xc4969234780 00:17:50.522 [2024-07-14 21:17:01.918868] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:50.522 [2024-07-14 21:17:01.918902] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0xc4969297e20 00:17:50.522 [2024-07-14 21:17:01.918913] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0xc4969234780 00:17:50.522 [2024-07-14 21:17:01.918917] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0xc4969234780 00:17:50.522 [2024-07-14 21:17:01.918926] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:50.522 pt1 00:17:50.522 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@541 -- # '[' 2 -gt 2 ']' 00:17:50.522 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@553 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:50.522 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:50.522 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:50.522 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:50.522 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:50.522 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:50.522 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:50.522 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:50.522 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:50.522 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:50.522 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:50.522 21:17:01 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:50.781 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:50.781 "name": "raid_bdev1", 00:17:50.781 "uuid": "642bfe22-4226-11ef-aa83-81fbc7dfef58", 00:17:50.781 "strip_size_kb": 0, 00:17:50.781 "state": "online", 00:17:50.781 
"raid_level": "raid1", 00:17:50.781 "superblock": true, 00:17:50.781 "num_base_bdevs": 2, 00:17:50.781 "num_base_bdevs_discovered": 1, 00:17:50.781 "num_base_bdevs_operational": 1, 00:17:50.781 "base_bdevs_list": [ 00:17:50.781 { 00:17:50.781 "name": null, 00:17:50.781 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:50.781 "is_configured": false, 00:17:50.781 "data_offset": 256, 00:17:50.781 "data_size": 7936 00:17:50.781 }, 00:17:50.781 { 00:17:50.781 "name": "pt2", 00:17:50.781 "uuid": "00000000-0000-0000-0000-000000000002", 00:17:50.781 "is_configured": true, 00:17:50.781 "data_offset": 256, 00:17:50.781 "data_size": 7936 00:17:50.781 } 00:17:50.781 ] 00:17:50.781 }' 00:17:50.781 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:50.781 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.038 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs online 00:17:51.038 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # jq -r '.[].base_bdevs_list[0].is_configured' 00:17:51.296 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@554 -- # [[ false == \f\a\l\s\e ]] 00:17:51.296 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:51.296 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # jq -r '.[] | .uuid' 00:17:51.554 [2024-07-14 21:17:02.941954] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:51.554 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@557 -- # '[' 642bfe22-4226-11ef-aa83-81fbc7dfef58 '!=' 642bfe22-4226-11ef-aa83-81fbc7dfef58 ']' 00:17:51.554 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@562 -- # killprocess 66981 00:17:51.554 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 66981 ']' 00:17:51.554 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 66981 00:17:51.554 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:17:51.554 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:17:51.554 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 66981 00:17:51.554 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:17:51.554 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdev_svc 00:17:51.554 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdev_svc = sudo ']' 00:17:51.554 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 66981' 00:17:51.554 killing process with pid 66981 00:17:51.554 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@967 -- # kill 66981 00:17:51.554 [2024-07-14 21:17:02.968181] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:17:51.554 [2024-07-14 21:17:02.968201] 
bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:17:51.554 [2024-07-14 21:17:02.968212] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:17:51.554 [2024-07-14 21:17:02.968216] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0xc4969234780 name raid_bdev1, state offline 00:17:51.554 21:17:02 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@972 -- # wait 66981 00:17:51.554 [2024-07-14 21:17:02.979750] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:17:51.812 21:17:03 bdev_raid.raid_superblock_test_md_interleaved -- bdev/bdev_raid.sh@564 -- # return 0 00:17:51.813 00:17:51.813 real 0m12.540s 00:17:51.813 user 0m22.424s 00:17:51.813 sys 0m1.927s 00:17:51.813 ************************************ 00:17:51.813 END TEST raid_superblock_test_md_interleaved 00:17:51.813 ************************************ 00:17:51.813 21:17:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:17:51.813 21:17:03 bdev_raid.raid_superblock_test_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.813 21:17:03 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:17:51.813 21:17:03 bdev_raid -- bdev/bdev_raid.sh@914 -- # run_test raid_rebuild_test_sb_md_interleaved raid_rebuild_test raid1 2 true false false 00:17:51.813 21:17:03 bdev_raid -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:17:51.813 21:17:03 bdev_raid -- common/autotest_common.sh@1105 -- # xtrace_disable 00:17:51.813 21:17:03 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:17:51.813 ************************************ 00:17:51.813 START TEST raid_rebuild_test_sb_md_interleaved 00:17:51.813 ************************************ 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1123 -- # raid_rebuild_test raid1 2 true false false 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@568 -- # local raid_level=raid1 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@569 -- # local num_base_bdevs=2 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@570 -- # local superblock=true 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@571 -- # local background_io=false 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@572 -- # local verify=false 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i = 1 )) 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev1 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # echo BaseBdev2 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i++ )) 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # (( i <= num_base_bdevs )) 00:17:51.813 21:17:03 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # base_bdevs=('BaseBdev1' 'BaseBdev2') 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@573 -- # local base_bdevs 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@574 -- # local raid_bdev_name=raid_bdev1 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@575 -- # local strip_size 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@576 -- # local create_arg 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@577 -- # local raid_bdev_size 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@578 -- # local data_offset 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@580 -- # '[' raid1 '!=' raid1 ']' 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@588 -- # strip_size=0 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@591 -- # '[' true = true ']' 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@592 -- # create_arg+=' -s' 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@596 -- # raid_pid=67368 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@597 -- # waitforlisten 67368 /var/tmp/spdk-raid.sock 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@829 -- # '[' -z 67368 ']' 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-raid.sock 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@595 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@834 -- # local max_retries=100 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock...' 00:17:51.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-raid.sock... 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@838 -- # xtrace_disable 00:17:51.813 21:17:03 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:51.813 [2024-07-14 21:17:03.201182] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:17:51.813 I/O size of 3145728 is greater than zero copy threshold (65536). 00:17:51.813 Zero copy mechanism will not be used. 00:17:51.813 [2024-07-14 21:17:03.201343] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:17:52.378 EAL: TSC is not safe to use in SMP mode 00:17:52.378 EAL: TSC is not invariant 00:17:52.378 [2024-07-14 21:17:03.710864] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:52.378 [2024-07-14 21:17:03.782491] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
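Note: the bdevperf launch recorded above follows the standard SPDK pattern for RPC-driven tests: the app is started idle with -z so it configures nothing on its own, and the harness then assembles the bdev stack over a private RPC socket. The "I/O size of 3145728 is greater than zero copy threshold" notice is a direct consequence of the -o 3M job size. A condensed sketch of the launch, using the exact flags from this trace (paths are specific to this CI environment; waitforlisten comes from autotest_common.sh; raid_pid=$! is shorthand introduced here):

  # start bdevperf idle (-z) on a private RPC socket; -T names the raid bdev the test will create
  /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    -r /var/tmp/spdk-raid.sock -T raid_bdev1 -t 60 -w randrw -M 50 -o 3M -q 2 -U -z -L bdev_raid &
  raid_pid=$!
  # block until the app is listening on the socket before issuing any rpc.py calls
  waitforlisten "$raid_pid" /var/tmp/spdk-raid.sock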
00:17:52.378 [2024-07-14 21:17:03.784775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.378 [2024-07-14 21:17:03.785804] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.378 [2024-07-14 21:17:03.785816] bdev_raid.c:1416:raid_bdev_get_ctx_size: *DEBUG*: raid_bdev_get_ctx_size 00:17:52.943 21:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:17:52.943 21:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@862 -- # return 0 00:17:52.943 21:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:17:52.943 21:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc 00:17:52.943 BaseBdev1_malloc 00:17:53.201 21:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:17:53.201 [2024-07-14 21:17:04.688620] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:17:53.201 [2024-07-14 21:17:04.688658] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.201 [2024-07-14 21:17:04.689368] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b3893a34780 00:17:53.201 [2024-07-14 21:17:04.689396] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.201 [2024-07-14 21:17:04.690162] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.201 [2024-07-14 21:17:04.690203] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:17:53.201 BaseBdev1 00:17:53.201 21:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@600 -- # for bdev in "${base_bdevs[@]}" 00:17:53.201 21:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@601 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc 00:17:53.458 BaseBdev2_malloc 00:17:53.458 21:17:04 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@602 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2 00:17:53.716 [2024-07-14 21:17:05.180648] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev2_malloc 00:17:53.716 [2024-07-14 21:17:05.180694] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:53.716 [2024-07-14 21:17:05.180735] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b3893a34c80 00:17:53.716 [2024-07-14 21:17:05.180743] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:53.716 [2024-07-14 21:17:05.181383] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:53.716 [2024-07-14 21:17:05.181413] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev2 00:17:53.716 BaseBdev2 00:17:53.716 21:17:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@606 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_malloc_create 32 4096 -m 32 -i -b spare_malloc 00:17:53.975 spare_malloc 
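Note: each leg of the array built in this test is a malloc bdev carrying 32 bytes of interleaved (-i) metadata per 4096-byte block, wrapped in a passthru bdev so the test can claim and drop it independently of the raid. The spare additionally goes through a delay bdev (created next, with 100000 us write latency), presumably so the rebuild stays in flight long enough to observe. A condensed sketch of the construction, using the same RPCs recorded in this trace (the rpc() wrapper is shorthand introduced here, not part of the test script):

  rpc() { /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock "$@"; }
  # 32 MiB malloc bdevs: 4096-byte blocks plus 32-byte interleaved (-i) metadata
  rpc bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev1_malloc
  rpc bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1
  rpc bdev_malloc_create 32 4096 -m 32 -i -b BaseBdev2_malloc
  rpc bdev_passthru_create -b BaseBdev2_malloc -p BaseBdev2
  # raid1 with an on-disk superblock (-s) across the two passthru legs
  rpc bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1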
00:17:53.975 21:17:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@607 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_delay_create -b spare_malloc -d spare_delay -r 0 -t 0 -w 100000 -n 100000 00:17:54.233 spare_delay 00:17:54.233 21:17:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@608 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:17:54.491 [2024-07-14 21:17:05.848674] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:17:54.491 [2024-07-14 21:17:05.848730] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:17:54.491 [2024-07-14 21:17:05.848768] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b3893a35400 00:17:54.491 [2024-07-14 21:17:05.848776] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:17:54.491 [2024-07-14 21:17:05.849318] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:17:54.492 [2024-07-14 21:17:05.849344] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:17:54.492 spare 00:17:54.492 21:17:05 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@611 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_create -s -r raid1 -b 'BaseBdev1 BaseBdev2' -n raid_bdev1 00:17:54.751 [2024-07-14 21:17:06.108706] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:17:54.751 [2024-07-14 21:17:06.109123] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:17:54.751 [2024-07-14 21:17:06.109213] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3b3893a35680 00:17:54.751 [2024-07-14 21:17:06.109219] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:17:54.751 [2024-07-14 21:17:06.109246] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b3893a97e20 00:17:54.751 [2024-07-14 21:17:06.109260] bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3b3893a35680 00:17:54.751 [2024-07-14 21:17:06.109264] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3b3893a35680 00:17:54.751 [2024-07-14 21:17:06.109276] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:54.751 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@612 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:17:54.751 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:54.751 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:54.751 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:54.751 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:54.751 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:17:54.751 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:54.751 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:54.751 21:17:06 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:54.751 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:54.751 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:54.751 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:55.009 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:55.009 "name": "raid_bdev1", 00:17:55.009 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:17:55.009 "strip_size_kb": 0, 00:17:55.009 "state": "online", 00:17:55.009 "raid_level": "raid1", 00:17:55.009 "superblock": true, 00:17:55.009 "num_base_bdevs": 2, 00:17:55.009 "num_base_bdevs_discovered": 2, 00:17:55.009 "num_base_bdevs_operational": 2, 00:17:55.009 "base_bdevs_list": [ 00:17:55.009 { 00:17:55.009 "name": "BaseBdev1", 00:17:55.009 "uuid": "6ef12cdd-488d-275f-87d3-ce799511c3f6", 00:17:55.009 "is_configured": true, 00:17:55.009 "data_offset": 256, 00:17:55.009 "data_size": 7936 00:17:55.009 }, 00:17:55.009 { 00:17:55.009 "name": "BaseBdev2", 00:17:55.009 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:17:55.009 "is_configured": true, 00:17:55.009 "data_offset": 256, 00:17:55.009 "data_size": 7936 00:17:55.009 } 00:17:55.009 ] 00:17:55.009 }' 00:17:55.009 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:55.009 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:55.268 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_get_bdevs -b raid_bdev1 00:17:55.268 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # jq -r '.[].num_blocks' 00:17:55.526 [2024-07-14 21:17:06.876805] bdev_raid.c:1107:raid_bdev_dump_info_json: *DEBUG*: raid_bdev_dump_config_json 00:17:55.526 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@615 -- # raid_bdev_size=7936 00:17:55.526 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:55.526 21:17:06 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # jq -r '.[].base_bdevs_list[0].data_offset' 00:17:55.785 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@618 -- # data_offset=256 00:17:55.785 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@620 -- # '[' false = true ']' 00:17:55.785 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@623 -- # '[' false = true ']' 00:17:55.785 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@639 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev BaseBdev1 00:17:56.044 [2024-07-14 21:17:07.412775] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: BaseBdev1 00:17:56.044 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@642 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:56.044 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:17:56.044 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:56.044 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:56.044 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:56.044 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:56.044 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:56.044 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:56.044 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:56.044 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:56.044 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:56.044 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:56.302 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:56.302 "name": "raid_bdev1", 00:17:56.302 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:17:56.302 "strip_size_kb": 0, 00:17:56.302 "state": "online", 00:17:56.302 "raid_level": "raid1", 00:17:56.302 "superblock": true, 00:17:56.302 "num_base_bdevs": 2, 00:17:56.302 "num_base_bdevs_discovered": 1, 00:17:56.302 "num_base_bdevs_operational": 1, 00:17:56.302 "base_bdevs_list": [ 00:17:56.302 { 00:17:56.302 "name": null, 00:17:56.302 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:56.302 "is_configured": false, 00:17:56.302 "data_offset": 256, 00:17:56.302 "data_size": 7936 00:17:56.302 }, 00:17:56.302 { 00:17:56.302 "name": "BaseBdev2", 00:17:56.303 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:17:56.303 "is_configured": true, 00:17:56.303 "data_offset": 256, 00:17:56.303 "data_size": 7936 00:17:56.303 } 00:17:56.303 ] 00:17:56.303 }' 00:17:56.303 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:56.303 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:56.561 21:17:07 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@645 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:56.820 [2024-07-14 21:17:08.184854] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:56.820 [2024-07-14 21:17:08.185117] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b3893a97ec0 00:17:56.820 [2024-07-14 21:17:08.186106] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:56.820 21:17:08 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@646 -- # sleep 1 00:17:57.755 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@649 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:17:57.755 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 
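Note: the hot-remove/hot-add sequence just recorded reduces to three RPCs. Pulling one leg leaves the raid1 online but degraded (num_base_bdevs_discovered drops from 2 to 1 in the JSON above), and adding the spare immediately starts a rebuild. A minimal sketch (rpc() as in the earlier note; the jq filter is the one the script itself uses):

  rpc bdev_raid_remove_base_bdev BaseBdev1
  # the array stays "online" with one of two legs configured
  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1")'
  rpc bdev_raid_add_base_bdev raid_bdev1 spare   # logs: Started rebuild on raid bdev raid_bdev1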
00:17:57.755 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:17:57.755 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:17:57.755 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:57.755 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:57.756 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.014 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:58.014 "name": "raid_bdev1", 00:17:58.014 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:17:58.014 "strip_size_kb": 0, 00:17:58.014 "state": "online", 00:17:58.014 "raid_level": "raid1", 00:17:58.014 "superblock": true, 00:17:58.014 "num_base_bdevs": 2, 00:17:58.014 "num_base_bdevs_discovered": 2, 00:17:58.014 "num_base_bdevs_operational": 2, 00:17:58.014 "process": { 00:17:58.014 "type": "rebuild", 00:17:58.014 "target": "spare", 00:17:58.014 "progress": { 00:17:58.014 "blocks": 3328, 00:17:58.014 "percent": 41 00:17:58.014 } 00:17:58.014 }, 00:17:58.014 "base_bdevs_list": [ 00:17:58.014 { 00:17:58.014 "name": "spare", 00:17:58.014 "uuid": "164d2b46-c3bf-bd5f-bbb3-81dac379167d", 00:17:58.014 "is_configured": true, 00:17:58.014 "data_offset": 256, 00:17:58.014 "data_size": 7936 00:17:58.014 }, 00:17:58.014 { 00:17:58.014 "name": "BaseBdev2", 00:17:58.014 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:17:58.014 "is_configured": true, 00:17:58.014 "data_offset": 256, 00:17:58.014 "data_size": 7936 00:17:58.014 } 00:17:58.014 ] 00:17:58.014 }' 00:17:58.014 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:58.014 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:17:58.014 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:58.014 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:17:58.014 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@652 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:17:58.291 [2024-07-14 21:17:09.793790] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.575 [2024-07-14 21:17:09.893841] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:17:58.575 [2024-07-14 21:17:09.893894] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:17:58.575 [2024-07-14 21:17:09.893915] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:17:58.575 [2024-07-14 21:17:09.893919] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:17:58.575 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@655 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:17:58.575 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 
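Note: while a rebuild is running, the raid bdev's JSON grows a "process" object, and that is what verify_raid_bdev_process keys on. A sketch of the same check (rpc() as above; the // "none" fallback mirrors the script and covers the no-process case):

  rpc bdev_raid_get_bdevs all | jq -r '.[] | select(.name == "raid_bdev1") |
      "\(.process.type // "none") \(.process.target // "none") \(.process.progress.percent // 0)%"'
  # prints e.g. "rebuild spare 41%" mid-rebuild, "none none 0%" otherwise

Also worth flagging: a little further down, bdev_raid.sh line 665 logs "[: =: unary operator expected" after evaluating '[' = false ']'. That is the classic single-bracket failure when an unquoted variable expands to nothing; quoting the expansion or using the [[ ]] builtin (e.g. [[ $flag == false ]]) would make the test well-formed. This run appears unaffected, since the command's non-zero status simply falls through to the next branch.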
00:17:58.575 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:17:58.575 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:17:58.575 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:17:58.575 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:17:58.575 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:17:58.575 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:17:58.575 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:17:58.575 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:17:58.575 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:58.575 21:17:09 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:58.838 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:17:58.838 "name": "raid_bdev1", 00:17:58.838 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:17:58.838 "strip_size_kb": 0, 00:17:58.838 "state": "online", 00:17:58.838 "raid_level": "raid1", 00:17:58.838 "superblock": true, 00:17:58.838 "num_base_bdevs": 2, 00:17:58.838 "num_base_bdevs_discovered": 1, 00:17:58.838 "num_base_bdevs_operational": 1, 00:17:58.839 "base_bdevs_list": [ 00:17:58.839 { 00:17:58.839 "name": null, 00:17:58.839 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:58.839 "is_configured": false, 00:17:58.839 "data_offset": 256, 00:17:58.839 "data_size": 7936 00:17:58.839 }, 00:17:58.839 { 00:17:58.839 "name": "BaseBdev2", 00:17:58.839 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:17:58.839 "is_configured": true, 00:17:58.839 "data_offset": 256, 00:17:58.839 "data_size": 7936 00:17:58.839 } 00:17:58.839 ] 00:17:58.839 }' 00:17:58.839 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:17:58.839 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:17:59.097 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@658 -- # verify_raid_bdev_process raid_bdev1 none none 00:17:59.097 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:17:59.097 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:17:59.097 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:17:59.097 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:17:59.097 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:17:59.097 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:17:59.356 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:17:59.356 "name": "raid_bdev1", 00:17:59.356 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:17:59.356 "strip_size_kb": 0, 00:17:59.356 "state": "online", 00:17:59.356 "raid_level": "raid1", 00:17:59.356 "superblock": true, 00:17:59.356 "num_base_bdevs": 2, 00:17:59.356 "num_base_bdevs_discovered": 1, 00:17:59.356 "num_base_bdevs_operational": 1, 00:17:59.356 "base_bdevs_list": [ 00:17:59.356 { 00:17:59.356 "name": null, 00:17:59.356 "uuid": "00000000-0000-0000-0000-000000000000", 00:17:59.356 "is_configured": false, 00:17:59.356 "data_offset": 256, 00:17:59.356 "data_size": 7936 00:17:59.356 }, 00:17:59.356 { 00:17:59.356 "name": "BaseBdev2", 00:17:59.356 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:17:59.356 "is_configured": true, 00:17:59.356 "data_offset": 256, 00:17:59.356 "data_size": 7936 00:17:59.356 } 00:17:59.356 ] 00:17:59.356 }' 00:17:59.356 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:17:59.356 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:17:59.356 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:17:59.356 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:17:59.356 21:17:10 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@661 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:17:59.614 [2024-07-14 21:17:10.993875] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:17:59.615 [2024-07-14 21:17:10.994093] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b3893a97e20 00:17:59.615 [2024-07-14 21:17:10.994991] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:17:59.615 21:17:11 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@662 -- # sleep 1 00:18:00.550 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@663 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.550 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:00.550 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:00.550 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:00.550 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:00.550 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:00.550 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.807 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:00.807 "name": "raid_bdev1", 00:18:00.807 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:00.807 "strip_size_kb": 0, 00:18:00.807 "state": "online", 00:18:00.807 "raid_level": "raid1", 00:18:00.807 "superblock": true, 00:18:00.807 "num_base_bdevs": 2, 00:18:00.807 "num_base_bdevs_discovered": 2, 00:18:00.807 
"num_base_bdevs_operational": 2, 00:18:00.807 "process": { 00:18:00.807 "type": "rebuild", 00:18:00.807 "target": "spare", 00:18:00.807 "progress": { 00:18:00.807 "blocks": 3072, 00:18:00.807 "percent": 38 00:18:00.808 } 00:18:00.808 }, 00:18:00.808 "base_bdevs_list": [ 00:18:00.808 { 00:18:00.808 "name": "spare", 00:18:00.808 "uuid": "164d2b46-c3bf-bd5f-bbb3-81dac379167d", 00:18:00.808 "is_configured": true, 00:18:00.808 "data_offset": 256, 00:18:00.808 "data_size": 7936 00:18:00.808 }, 00:18:00.808 { 00:18:00.808 "name": "BaseBdev2", 00:18:00.808 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:00.808 "is_configured": true, 00:18:00.808 "data_offset": 256, 00:18:00.808 "data_size": 7936 00:18:00.808 } 00:18:00.808 ] 00:18:00.808 }' 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' true = true ']' 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@665 -- # '[' = false ']' 00:18:00.808 /home/vagrant/spdk_repo/spdk/test/bdev/bdev_raid.sh: line 665: [: =: unary operator expected 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@690 -- # local num_base_bdevs_operational=2 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' raid1 = raid1 ']' 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@692 -- # '[' 2 -gt 2 ']' 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@705 -- # local timeout=679 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:00.808 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:01.065 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:01.065 "name": "raid_bdev1", 00:18:01.065 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:01.065 "strip_size_kb": 0, 00:18:01.065 "state": "online", 00:18:01.065 "raid_level": "raid1", 00:18:01.065 "superblock": true, 00:18:01.065 
"num_base_bdevs": 2, 00:18:01.065 "num_base_bdevs_discovered": 2, 00:18:01.065 "num_base_bdevs_operational": 2, 00:18:01.065 "process": { 00:18:01.065 "type": "rebuild", 00:18:01.065 "target": "spare", 00:18:01.065 "progress": { 00:18:01.065 "blocks": 3840, 00:18:01.065 "percent": 48 00:18:01.065 } 00:18:01.065 }, 00:18:01.065 "base_bdevs_list": [ 00:18:01.065 { 00:18:01.065 "name": "spare", 00:18:01.065 "uuid": "164d2b46-c3bf-bd5f-bbb3-81dac379167d", 00:18:01.065 "is_configured": true, 00:18:01.065 "data_offset": 256, 00:18:01.065 "data_size": 7936 00:18:01.065 }, 00:18:01.065 { 00:18:01.065 "name": "BaseBdev2", 00:18:01.065 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:01.065 "is_configured": true, 00:18:01.065 "data_offset": 256, 00:18:01.065 "data_size": 7936 00:18:01.065 } 00:18:01.065 ] 00:18:01.065 }' 00:18:01.065 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:01.065 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:01.065 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:01.065 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:01.065 21:17:12 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:02.438 "name": "raid_bdev1", 00:18:02.438 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:02.438 "strip_size_kb": 0, 00:18:02.438 "state": "online", 00:18:02.438 "raid_level": "raid1", 00:18:02.438 "superblock": true, 00:18:02.438 "num_base_bdevs": 2, 00:18:02.438 "num_base_bdevs_discovered": 2, 00:18:02.438 "num_base_bdevs_operational": 2, 00:18:02.438 "process": { 00:18:02.438 "type": "rebuild", 00:18:02.438 "target": "spare", 00:18:02.438 "progress": { 00:18:02.438 "blocks": 7168, 00:18:02.438 "percent": 90 00:18:02.438 } 00:18:02.438 }, 00:18:02.438 "base_bdevs_list": [ 00:18:02.438 { 00:18:02.438 "name": "spare", 00:18:02.438 "uuid": "164d2b46-c3bf-bd5f-bbb3-81dac379167d", 00:18:02.438 "is_configured": true, 00:18:02.438 "data_offset": 256, 00:18:02.438 "data_size": 7936 00:18:02.438 }, 00:18:02.438 { 00:18:02.438 "name": "BaseBdev2", 00:18:02.438 "uuid": 
"abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:02.438 "is_configured": true, 00:18:02.438 "data_offset": 256, 00:18:02.438 "data_size": 7936 00:18:02.438 } 00:18:02.438 ] 00:18:02.438 }' 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:02.438 21:17:13 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@710 -- # sleep 1 00:18:02.696 [2024-07-14 21:17:14.111225] bdev_raid.c:2789:raid_bdev_process_thread_run: *DEBUG*: process completed on raid_bdev1 00:18:02.696 [2024-07-14 21:17:14.111275] bdev_raid.c:2506:raid_bdev_process_finish_done: *NOTICE*: Finished rebuild on raid bdev raid_bdev1 00:18:02.696 [2024-07-14 21:17:14.111351] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:03.629 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@706 -- # (( SECONDS < timeout )) 00:18:03.629 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@707 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:03.629 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:03.629 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:03.629 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:03.629 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:03.629 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:03.629 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.887 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:03.887 "name": "raid_bdev1", 00:18:03.887 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:03.887 "strip_size_kb": 0, 00:18:03.887 "state": "online", 00:18:03.887 "raid_level": "raid1", 00:18:03.887 "superblock": true, 00:18:03.887 "num_base_bdevs": 2, 00:18:03.887 "num_base_bdevs_discovered": 2, 00:18:03.887 "num_base_bdevs_operational": 2, 00:18:03.887 "base_bdevs_list": [ 00:18:03.887 { 00:18:03.887 "name": "spare", 00:18:03.887 "uuid": "164d2b46-c3bf-bd5f-bbb3-81dac379167d", 00:18:03.887 "is_configured": true, 00:18:03.887 "data_offset": 256, 00:18:03.887 "data_size": 7936 00:18:03.887 }, 00:18:03.887 { 00:18:03.887 "name": "BaseBdev2", 00:18:03.887 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:03.887 "is_configured": true, 00:18:03.887 "data_offset": 256, 00:18:03.887 "data_size": 7936 00:18:03.887 } 00:18:03.887 ] 00:18:03.887 }' 00:18:03.887 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:03.887 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \r\e\b\u\i\l\d ]] 00:18:03.887 21:17:15 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:03.887 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \s\p\a\r\e ]] 00:18:03.887 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@708 -- # break 00:18:03.887 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@714 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:03.887 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:03.887 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:03.887 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:03.887 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:03.887 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:03.888 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:04.146 "name": "raid_bdev1", 00:18:04.146 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:04.146 "strip_size_kb": 0, 00:18:04.146 "state": "online", 00:18:04.146 "raid_level": "raid1", 00:18:04.146 "superblock": true, 00:18:04.146 "num_base_bdevs": 2, 00:18:04.146 "num_base_bdevs_discovered": 2, 00:18:04.146 "num_base_bdevs_operational": 2, 00:18:04.146 "base_bdevs_list": [ 00:18:04.146 { 00:18:04.146 "name": "spare", 00:18:04.146 "uuid": "164d2b46-c3bf-bd5f-bbb3-81dac379167d", 00:18:04.146 "is_configured": true, 00:18:04.146 "data_offset": 256, 00:18:04.146 "data_size": 7936 00:18:04.146 }, 00:18:04.146 { 00:18:04.146 "name": "BaseBdev2", 00:18:04.146 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:04.146 "is_configured": true, 00:18:04.146 "data_offset": 256, 00:18:04.146 "data_size": 7936 00:18:04.146 } 00:18:04.146 ] 00:18:04.146 }' 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@715 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local 
num_base_bdevs_operational=2 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.146 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:04.405 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:04.405 "name": "raid_bdev1", 00:18:04.405 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:04.405 "strip_size_kb": 0, 00:18:04.405 "state": "online", 00:18:04.405 "raid_level": "raid1", 00:18:04.405 "superblock": true, 00:18:04.405 "num_base_bdevs": 2, 00:18:04.405 "num_base_bdevs_discovered": 2, 00:18:04.405 "num_base_bdevs_operational": 2, 00:18:04.405 "base_bdevs_list": [ 00:18:04.405 { 00:18:04.405 "name": "spare", 00:18:04.405 "uuid": "164d2b46-c3bf-bd5f-bbb3-81dac379167d", 00:18:04.405 "is_configured": true, 00:18:04.405 "data_offset": 256, 00:18:04.405 "data_size": 7936 00:18:04.405 }, 00:18:04.405 { 00:18:04.405 "name": "BaseBdev2", 00:18:04.405 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:04.405 "is_configured": true, 00:18:04.405 "data_offset": 256, 00:18:04.405 "data_size": 7936 00:18:04.405 } 00:18:04.405 ] 00:18:04.405 }' 00:18:04.405 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:04.405 21:17:15 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:04.663 21:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@718 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_delete raid_bdev1 00:18:04.922 [2024-07-14 21:17:16.411855] bdev_raid.c:2356:raid_bdev_delete: *DEBUG*: delete raid bdev: raid_bdev1 00:18:04.922 [2024-07-14 21:17:16.411879] bdev_raid.c:1844:raid_bdev_deconfigure: *DEBUG*: raid bdev state changing from online to offline 00:18:04.922 [2024-07-14 21:17:16.411903] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:04.922 [2024-07-14 21:17:16.411921] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:04.922 [2024-07-14 21:17:16.411925] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b3893a35680 name raid_bdev1, state offline 00:18:04.922 21:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:04.922 21:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # jq length 00:18:05.180 21:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@719 -- # [[ 0 == 0 ]] 00:18:05.180 21:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@721 -- # '[' false = true ']' 00:18:05.180 21:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@742 -- # '[' 
true = true ']' 00:18:05.180 21:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@744 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:05.439 21:17:16 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@745 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:05.697 [2024-07-14 21:17:17.119863] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:05.697 [2024-07-14 21:17:17.119926] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:05.697 [2024-07-14 21:17:17.119965] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b3893a35400 00:18:05.697 [2024-07-14 21:17:17.119973] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:05.697 [2024-07-14 21:17:17.120807] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:05.697 [2024-07-14 21:17:17.120857] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:05.697 [2024-07-14 21:17:17.120882] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:05.697 [2024-07-14 21:17:17.120893] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:05.697 [2024-07-14 21:17:17.120913] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev2 is claimed 00:18:05.697 spare 00:18:05.697 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@747 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 2 00:18:05.697 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:05.697 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:05.697 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:05.697 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:05.697 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=2 00:18:05.697 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:05.697 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:05.697 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:05.697 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:05.697 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:05.697 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:05.697 [2024-07-14 21:17:17.220884] bdev_raid.c:1694:raid_bdev_configure_cont: *DEBUG*: io device register 0x3b3893a35680 00:18:05.697 [2024-07-14 21:17:17.220896] bdev_raid.c:1696:raid_bdev_configure_cont: *DEBUG*: blockcnt 7936, blocklen 4128 00:18:05.697 [2024-07-14 21:17:17.220917] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b3893a97e20 00:18:05.697 [2024-07-14 21:17:17.220930] 
bdev_raid.c:1724:raid_bdev_configure_cont: *DEBUG*: raid bdev generic 0x3b3893a35680 00:18:05.697 [2024-07-14 21:17:17.220934] bdev_raid.c:1726:raid_bdev_configure_cont: *DEBUG*: raid bdev is created with name raid_bdev1, raid_bdev 0x3b3893a35680 00:18:05.697 [2024-07-14 21:17:17.220944] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:05.956 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:05.956 "name": "raid_bdev1", 00:18:05.956 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:05.956 "strip_size_kb": 0, 00:18:05.956 "state": "online", 00:18:05.956 "raid_level": "raid1", 00:18:05.956 "superblock": true, 00:18:05.956 "num_base_bdevs": 2, 00:18:05.956 "num_base_bdevs_discovered": 2, 00:18:05.956 "num_base_bdevs_operational": 2, 00:18:05.956 "base_bdevs_list": [ 00:18:05.956 { 00:18:05.956 "name": "spare", 00:18:05.956 "uuid": "164d2b46-c3bf-bd5f-bbb3-81dac379167d", 00:18:05.956 "is_configured": true, 00:18:05.956 "data_offset": 256, 00:18:05.956 "data_size": 7936 00:18:05.956 }, 00:18:05.956 { 00:18:05.956 "name": "BaseBdev2", 00:18:05.956 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:05.956 "is_configured": true, 00:18:05.956 "data_offset": 256, 00:18:05.956 "data_size": 7936 00:18:05.956 } 00:18:05.956 ] 00:18:05.956 }' 00:18:05.956 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:05.956 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:06.215 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@748 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:06.215 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:06.215 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:06.215 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:06.215 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:06.215 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.215 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:06.474 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:06.474 "name": "raid_bdev1", 00:18:06.474 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:06.474 "strip_size_kb": 0, 00:18:06.474 "state": "online", 00:18:06.474 "raid_level": "raid1", 00:18:06.474 "superblock": true, 00:18:06.474 "num_base_bdevs": 2, 00:18:06.474 "num_base_bdevs_discovered": 2, 00:18:06.474 "num_base_bdevs_operational": 2, 00:18:06.474 "base_bdevs_list": [ 00:18:06.474 { 00:18:06.474 "name": "spare", 00:18:06.474 "uuid": "164d2b46-c3bf-bd5f-bbb3-81dac379167d", 00:18:06.474 "is_configured": true, 00:18:06.474 "data_offset": 256, 00:18:06.474 "data_size": 7936 00:18:06.474 }, 00:18:06.474 { 00:18:06.474 "name": "BaseBdev2", 00:18:06.474 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:06.474 "is_configured": true, 00:18:06.474 "data_offset": 256, 00:18:06.474 "data_size": 7936 00:18:06.474 } 00:18:06.474 ] 00:18:06.474 }' 00:18:06.474 21:17:17 
bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:06.474 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:06.474 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:06.474 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:06.474 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # jq -r '.[].base_bdevs_list[0].name' 00:18:06.474 21:17:17 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.733 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@749 -- # [[ spare == \s\p\a\r\e ]] 00:18:06.733 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@752 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_remove_base_bdev spare 00:18:06.992 [2024-07-14 21:17:18.427873] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:06.992 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@753 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:06.993 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:06.993 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:06.993 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:06.993 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:06.993 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:06.993 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:06.993 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:06.993 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:06.993 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:06.993 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:06.993 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:07.251 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:07.251 "name": "raid_bdev1", 00:18:07.251 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:07.251 "strip_size_kb": 0, 00:18:07.251 "state": "online", 00:18:07.251 "raid_level": "raid1", 00:18:07.251 "superblock": true, 00:18:07.251 "num_base_bdevs": 2, 00:18:07.251 "num_base_bdevs_discovered": 1, 00:18:07.251 "num_base_bdevs_operational": 1, 00:18:07.251 "base_bdevs_list": [ 00:18:07.251 { 00:18:07.251 "name": null, 00:18:07.251 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:07.251 "is_configured": false, 00:18:07.251 "data_offset": 256, 00:18:07.251 "data_size": 7936 00:18:07.251 }, 
00:18:07.251 { 00:18:07.251 "name": "BaseBdev2", 00:18:07.251 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:07.251 "is_configured": true, 00:18:07.251 "data_offset": 256, 00:18:07.251 "data_size": 7936 00:18:07.251 } 00:18:07.251 ] 00:18:07.251 }' 00:18:07.251 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:07.251 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:07.509 21:17:18 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@754 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 spare 00:18:07.767 [2024-07-14 21:17:19.207884] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.767 [2024-07-14 21:17:19.207912] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:07.767 [2024-07-14 21:17:19.207917] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 00:18:07.767 [2024-07-14 21:17:19.207940] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:07.767 [2024-07-14 21:17:19.208206] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b3893a97ec0 00:18:07.767 [2024-07-14 21:17:19.208925] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:07.767 21:17:19 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@755 -- # sleep 1 00:18:09.140 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@756 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:09.140 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:09.140 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:09.140 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:09.140 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:09.140 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.140 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.140 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:09.140 "name": "raid_bdev1", 00:18:09.140 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:09.140 "strip_size_kb": 0, 00:18:09.140 "state": "online", 00:18:09.140 "raid_level": "raid1", 00:18:09.140 "superblock": true, 00:18:09.140 "num_base_bdevs": 2, 00:18:09.140 "num_base_bdevs_discovered": 2, 00:18:09.140 "num_base_bdevs_operational": 2, 00:18:09.140 "process": { 00:18:09.140 "type": "rebuild", 00:18:09.140 "target": "spare", 00:18:09.140 "progress": { 00:18:09.140 "blocks": 3328, 00:18:09.140 "percent": 41 00:18:09.140 } 00:18:09.140 }, 00:18:09.140 "base_bdevs_list": [ 00:18:09.140 { 00:18:09.140 "name": "spare", 00:18:09.140 "uuid": "164d2b46-c3bf-bd5f-bbb3-81dac379167d", 00:18:09.140 "is_configured": true, 00:18:09.140 "data_offset": 256, 00:18:09.140 "data_size": 7936 00:18:09.140 }, 00:18:09.140 { 
00:18:09.140 "name": "BaseBdev2", 00:18:09.140 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:09.140 "is_configured": true, 00:18:09.140 "data_offset": 256, 00:18:09.140 "data_size": 7936 00:18:09.140 } 00:18:09.140 ] 00:18:09.140 }' 00:18:09.140 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:09.140 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:09.140 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:09.140 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:09.140 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@759 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:09.398 [2024-07-14 21:17:20.809880] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.398 [2024-07-14 21:17:20.817820] bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:09.398 [2024-07-14 21:17:20.817860] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:09.398 [2024-07-14 21:17:20.817865] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:09.398 [2024-07-14 21:17:20.817869] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:18:09.398 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@760 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:09.398 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:09.398 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:09.398 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:09.398 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:09.398 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:09.398 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:09.398 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:09.398 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:09.398 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:09.398 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:09.398 21:17:20 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:09.655 21:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:09.655 "name": "raid_bdev1", 00:18:09.655 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:09.655 "strip_size_kb": 0, 00:18:09.655 "state": "online", 00:18:09.655 "raid_level": "raid1", 00:18:09.655 "superblock": true, 
00:18:09.655 "num_base_bdevs": 2, 00:18:09.655 "num_base_bdevs_discovered": 1, 00:18:09.655 "num_base_bdevs_operational": 1, 00:18:09.655 "base_bdevs_list": [ 00:18:09.655 { 00:18:09.655 "name": null, 00:18:09.655 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:09.655 "is_configured": false, 00:18:09.655 "data_offset": 256, 00:18:09.655 "data_size": 7936 00:18:09.655 }, 00:18:09.655 { 00:18:09.655 "name": "BaseBdev2", 00:18:09.655 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:09.655 "is_configured": true, 00:18:09.655 "data_offset": 256, 00:18:09.655 "data_size": 7936 00:18:09.655 } 00:18:09.655 ] 00:18:09.655 }' 00:18:09.655 21:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:09.655 21:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:09.913 21:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@761 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b spare_delay -p spare 00:18:10.171 [2024-07-14 21:17:21.605879] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on spare_delay 00:18:10.171 [2024-07-14 21:17:21.605905] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:10.171 [2024-07-14 21:17:21.605940] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b3893a35400 00:18:10.171 [2024-07-14 21:17:21.605947] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:10.171 [2024-07-14 21:17:21.605994] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:10.171 [2024-07-14 21:17:21.606004] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: spare 00:18:10.171 [2024-07-14 21:17:21.606018] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev spare 00:18:10.171 [2024-07-14 21:17:21.606022] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev spare (4) smaller than existing raid bdev raid_bdev1 (5) 00:18:10.171 [2024-07-14 21:17:21.606026] bdev_raid.c:3620:raid_bdev_examine_sb: *NOTICE*: Re-adding bdev spare to raid bdev raid_bdev1. 
00:18:10.171 [2024-07-14 21:17:21.606035] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev spare is claimed 00:18:10.171 [2024-07-14 21:17:21.606263] bdev_raid.c: 251:raid_bdev_create_cb: *DEBUG*: raid_bdev_create_cb, 0x3b3893a97e20 00:18:10.171 [2024-07-14 21:17:21.606910] bdev_raid.c:2825:raid_bdev_process_thread_init: *NOTICE*: Started rebuild on raid bdev raid_bdev1 00:18:10.171 spare 00:18:10.171 21:17:21 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@762 -- # sleep 1 00:18:11.545 21:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@763 -- # verify_raid_bdev_process raid_bdev1 rebuild spare 00:18:11.545 21:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:11.545 21:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=rebuild 00:18:11.545 21:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=spare 00:18:11.545 21:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:11.545 21:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.545 21:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:11.545 21:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:11.545 "name": "raid_bdev1", 00:18:11.545 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:11.545 "strip_size_kb": 0, 00:18:11.545 "state": "online", 00:18:11.545 "raid_level": "raid1", 00:18:11.545 "superblock": true, 00:18:11.545 "num_base_bdevs": 2, 00:18:11.545 "num_base_bdevs_discovered": 2, 00:18:11.545 "num_base_bdevs_operational": 2, 00:18:11.545 "process": { 00:18:11.545 "type": "rebuild", 00:18:11.545 "target": "spare", 00:18:11.545 "progress": { 00:18:11.545 "blocks": 3328, 00:18:11.545 "percent": 41 00:18:11.545 } 00:18:11.545 }, 00:18:11.545 "base_bdevs_list": [ 00:18:11.545 { 00:18:11.545 "name": "spare", 00:18:11.546 "uuid": "164d2b46-c3bf-bd5f-bbb3-81dac379167d", 00:18:11.546 "is_configured": true, 00:18:11.546 "data_offset": 256, 00:18:11.546 "data_size": 7936 00:18:11.546 }, 00:18:11.546 { 00:18:11.546 "name": "BaseBdev2", 00:18:11.546 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:11.546 "is_configured": true, 00:18:11.546 "data_offset": 256, 00:18:11.546 "data_size": 7936 00:18:11.546 } 00:18:11.546 ] 00:18:11.546 }' 00:18:11.546 21:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:11.546 21:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ rebuild == \r\e\b\u\i\l\d ]] 00:18:11.546 21:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:11.546 21:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ spare == \s\p\a\r\e ]] 00:18:11.546 21:17:22 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@766 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete spare 00:18:11.817 [2024-07-14 21:17:23.173086] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.818 [2024-07-14 21:17:23.213030] 
bdev_raid.c:2516:raid_bdev_process_finish_done: *WARNING*: Finished rebuild on raid bdev raid_bdev1: Operation not supported by device 00:18:11.818 [2024-07-14 21:17:23.213070] bdev_raid.c: 331:raid_bdev_destroy_cb: *DEBUG*: raid_bdev_destroy_cb 00:18:11.818 [2024-07-14 21:17:23.213075] bdev_raid.c:2120:_raid_bdev_remove_base_bdev: *DEBUG*: spare 00:18:11.818 [2024-07-14 21:17:23.213078] bdev_raid.c:2451:raid_bdev_process_finish_target_removed: *ERROR*: Failed to remove target bdev: Operation not supported by device 00:18:11.818 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@767 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:11.818 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:11.818 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:11.818 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:11.818 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:11.818 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:11.818 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:11.818 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:11.818 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:11.818 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:11.818 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:11.818 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.077 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:12.077 "name": "raid_bdev1", 00:18:12.077 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:12.077 "strip_size_kb": 0, 00:18:12.077 "state": "online", 00:18:12.077 "raid_level": "raid1", 00:18:12.077 "superblock": true, 00:18:12.077 "num_base_bdevs": 2, 00:18:12.077 "num_base_bdevs_discovered": 1, 00:18:12.077 "num_base_bdevs_operational": 1, 00:18:12.077 "base_bdevs_list": [ 00:18:12.077 { 00:18:12.077 "name": null, 00:18:12.077 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.077 "is_configured": false, 00:18:12.077 "data_offset": 256, 00:18:12.077 "data_size": 7936 00:18:12.077 }, 00:18:12.077 { 00:18:12.077 "name": "BaseBdev2", 00:18:12.077 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:12.077 "is_configured": true, 00:18:12.077 "data_offset": 256, 00:18:12.077 "data_size": 7936 00:18:12.077 } 00:18:12.077 ] 00:18:12.077 }' 00:18:12.077 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:12.077 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:12.335 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@768 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:12.335 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- 
# local raid_bdev_name=raid_bdev1 00:18:12.335 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:12.335 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:12.335 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:12.335 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:12.335 21:17:23 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:12.633 21:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:12.633 "name": "raid_bdev1", 00:18:12.633 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:12.633 "strip_size_kb": 0, 00:18:12.633 "state": "online", 00:18:12.633 "raid_level": "raid1", 00:18:12.633 "superblock": true, 00:18:12.633 "num_base_bdevs": 2, 00:18:12.633 "num_base_bdevs_discovered": 1, 00:18:12.633 "num_base_bdevs_operational": 1, 00:18:12.633 "base_bdevs_list": [ 00:18:12.633 { 00:18:12.633 "name": null, 00:18:12.633 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:12.633 "is_configured": false, 00:18:12.633 "data_offset": 256, 00:18:12.633 "data_size": 7936 00:18:12.633 }, 00:18:12.633 { 00:18:12.633 "name": "BaseBdev2", 00:18:12.633 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:12.633 "is_configured": true, 00:18:12.633 "data_offset": 256, 00:18:12.633 "data_size": 7936 00:18:12.633 } 00:18:12.633 ] 00:18:12.633 }' 00:18:12.633 21:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:12.633 21:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:12.633 21:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:12.633 21:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:12.633 21:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@771 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_delete BaseBdev1 00:18:12.914 21:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@772 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_passthru_create -b BaseBdev1_malloc -p BaseBdev1 00:18:13.172 [2024-07-14 21:17:24.505091] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on BaseBdev1_malloc 00:18:13.172 [2024-07-14 21:17:24.505131] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:18:13.172 [2024-07-14 21:17:24.505175] vbdev_passthru.c: 680:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x3b3893a34780 00:18:13.172 [2024-07-14 21:17:24.505182] vbdev_passthru.c: 695:vbdev_passthru_register: *NOTICE*: bdev claimed 00:18:13.172 [2024-07-14 21:17:24.505224] vbdev_passthru.c: 708:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:18:13.172 [2024-07-14 21:17:24.505232] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: created pt_bdev for: BaseBdev1 00:18:13.172 [2024-07-14 21:17:24.505245] bdev_raid.c:3752:raid_bdev_examine_cont: *DEBUG*: raid superblock found on bdev BaseBdev1 00:18:13.172 [2024-07-14 21:17:24.505250] 
bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:13.172 [2024-07-14 21:17:24.505253] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:13.172 BaseBdev1 00:18:13.172 21:17:24 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@773 -- # sleep 1 00:18:14.106 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@774 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:14.106 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:14.106 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:14.106 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:14.106 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:14.106 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:14.106 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:14.106 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:14.106 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:14.106 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:14.106 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.106 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.365 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:14.365 "name": "raid_bdev1", 00:18:14.365 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:14.365 "strip_size_kb": 0, 00:18:14.365 "state": "online", 00:18:14.365 "raid_level": "raid1", 00:18:14.365 "superblock": true, 00:18:14.365 "num_base_bdevs": 2, 00:18:14.365 "num_base_bdevs_discovered": 1, 00:18:14.365 "num_base_bdevs_operational": 1, 00:18:14.365 "base_bdevs_list": [ 00:18:14.365 { 00:18:14.365 "name": null, 00:18:14.365 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.365 "is_configured": false, 00:18:14.365 "data_offset": 256, 00:18:14.365 "data_size": 7936 00:18:14.365 }, 00:18:14.365 { 00:18:14.365 "name": "BaseBdev2", 00:18:14.365 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:14.365 "is_configured": true, 00:18:14.365 "data_offset": 256, 00:18:14.365 "data_size": 7936 00:18:14.365 } 00:18:14.365 ] 00:18:14.365 }' 00:18:14.365 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:14.365 21:17:25 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:14.623 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@775 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:14.623 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:14.623 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- 
bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:14.623 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:14.623 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:14.623 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:14.623 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:14.882 "name": "raid_bdev1", 00:18:14.882 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:14.882 "strip_size_kb": 0, 00:18:14.882 "state": "online", 00:18:14.882 "raid_level": "raid1", 00:18:14.882 "superblock": true, 00:18:14.882 "num_base_bdevs": 2, 00:18:14.882 "num_base_bdevs_discovered": 1, 00:18:14.882 "num_base_bdevs_operational": 1, 00:18:14.882 "base_bdevs_list": [ 00:18:14.882 { 00:18:14.882 "name": null, 00:18:14.882 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:14.882 "is_configured": false, 00:18:14.882 "data_offset": 256, 00:18:14.882 "data_size": 7936 00:18:14.882 }, 00:18:14.882 { 00:18:14.882 "name": "BaseBdev2", 00:18:14.882 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:14.882 "is_configured": true, 00:18:14.882 "data_offset": 256, 00:18:14.882 "data_size": 7936 00:18:14.882 } 00:18:14.882 ] 00:18:14.882 }' 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@776 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@648 -- # local es=0 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@650 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@636 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" 
in 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@642 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:18:14.882 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_add_base_bdev raid_bdev1 BaseBdev1 00:18:15.449 [2024-07-14 21:17:26.705120] bdev_raid.c:3198:raid_bdev_configure_base_bdev: *DEBUG*: bdev BaseBdev1 is claimed 00:18:15.450 [2024-07-14 21:17:26.705149] bdev_raid.c:3564:raid_bdev_examine_sb: *DEBUG*: raid superblock seq_number on bdev BaseBdev1 (1) smaller than existing raid bdev raid_bdev1 (5) 00:18:15.450 [2024-07-14 21:17:26.705153] bdev_raid.c:3581:raid_bdev_examine_sb: *DEBUG*: raid superblock does not contain this bdev's uuid 00:18:15.450 request: 00:18:15.450 { 00:18:15.450 "base_bdev": "BaseBdev1", 00:18:15.450 "raid_bdev": "raid_bdev1", 00:18:15.450 "method": "bdev_raid_add_base_bdev", 00:18:15.450 "req_id": 1 00:18:15.450 } 00:18:15.450 Got JSON-RPC error response 00:18:15.450 response: 00:18:15.450 { 00:18:15.450 "code": -22, 00:18:15.450 "message": "Failed to add base bdev to RAID bdev: Invalid argument" 00:18:15.450 } 00:18:15.450 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@651 -- # es=1 00:18:15.450 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:15.450 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:15.450 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:15.450 21:17:26 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@777 -- # sleep 1 00:18:16.386 21:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@778 -- # verify_raid_bdev_state raid_bdev1 online raid1 0 1 00:18:16.386 21:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@116 -- # local raid_bdev_name=raid_bdev1 00:18:16.386 21:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@117 -- # local expected_state=online 00:18:16.386 21:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@118 -- # local raid_level=raid1 00:18:16.386 21:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@119 -- # local strip_size=0 00:18:16.386 21:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@120 -- # local num_base_bdevs_operational=1 00:18:16.386 21:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@121 -- # local raid_bdev_info 00:18:16.386 21:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@122 -- # local num_base_bdevs 00:18:16.386 21:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@123 -- # local num_base_bdevs_discovered 00:18:16.386 21:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@124 -- # local tmp 00:18:16.386 21:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.386 21:17:27 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # jq -r '.[] | select(.name == 
"raid_bdev1")' 00:18:16.645 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@126 -- # raid_bdev_info='{ 00:18:16.645 "name": "raid_bdev1", 00:18:16.645 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:16.645 "strip_size_kb": 0, 00:18:16.645 "state": "online", 00:18:16.645 "raid_level": "raid1", 00:18:16.645 "superblock": true, 00:18:16.645 "num_base_bdevs": 2, 00:18:16.645 "num_base_bdevs_discovered": 1, 00:18:16.645 "num_base_bdevs_operational": 1, 00:18:16.645 "base_bdevs_list": [ 00:18:16.645 { 00:18:16.645 "name": null, 00:18:16.645 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:16.645 "is_configured": false, 00:18:16.645 "data_offset": 256, 00:18:16.645 "data_size": 7936 00:18:16.645 }, 00:18:16.645 { 00:18:16.645 "name": "BaseBdev2", 00:18:16.645 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:16.645 "is_configured": true, 00:18:16.645 "data_offset": 256, 00:18:16.645 "data_size": 7936 00:18:16.645 } 00:18:16.645 ] 00:18:16.645 }' 00:18:16.645 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@128 -- # xtrace_disable 00:18:16.645 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:16.904 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@779 -- # verify_raid_bdev_process raid_bdev1 none none 00:18:16.904 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@182 -- # local raid_bdev_name=raid_bdev1 00:18:16.904 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@183 -- # local process_type=none 00:18:16.904 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@184 -- # local target=none 00:18:16.904 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@185 -- # local raid_bdev_info 00:18:16.904 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-raid.sock bdev_raid_get_bdevs all 00:18:16.904 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # jq -r '.[] | select(.name == "raid_bdev1")' 00:18:17.162 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@187 -- # raid_bdev_info='{ 00:18:17.162 "name": "raid_bdev1", 00:18:17.162 "uuid": "6c120b14-4226-11ef-aa83-81fbc7dfef58", 00:18:17.162 "strip_size_kb": 0, 00:18:17.162 "state": "online", 00:18:17.162 "raid_level": "raid1", 00:18:17.162 "superblock": true, 00:18:17.162 "num_base_bdevs": 2, 00:18:17.162 "num_base_bdevs_discovered": 1, 00:18:17.162 "num_base_bdevs_operational": 1, 00:18:17.162 "base_bdevs_list": [ 00:18:17.162 { 00:18:17.162 "name": null, 00:18:17.162 "uuid": "00000000-0000-0000-0000-000000000000", 00:18:17.162 "is_configured": false, 00:18:17.163 "data_offset": 256, 00:18:17.163 "data_size": 7936 00:18:17.163 }, 00:18:17.163 { 00:18:17.163 "name": "BaseBdev2", 00:18:17.163 "uuid": "abf09faf-29cb-8658-b875-99bcd13e78af", 00:18:17.163 "is_configured": true, 00:18:17.163 "data_offset": 256, 00:18:17.163 "data_size": 7936 00:18:17.163 } 00:18:17.163 ] 00:18:17.163 }' 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # jq -r '.process.type // "none"' 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@189 -- # [[ none == \n\o\n\e ]] 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # jq -r '.process.target // "none"' 
00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@190 -- # [[ none == \n\o\n\e ]] 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@782 -- # killprocess 67368 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@948 -- # '[' -z 67368 ']' 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@952 -- # kill -0 67368 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # uname 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # ps -c -o command 67368 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # tail -1 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@956 -- # process_name=bdevperf 00:18:17.163 killing process with pid 67368 00:18:17.163 Received shutdown signal, test time was about 60.000000 seconds 00:18:17.163 00:18:17.163 Latency(us) 00:18:17.163 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.163 =================================================================================================================== 00:18:17.163 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@958 -- # '[' bdevperf = sudo ']' 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@966 -- # echo 'killing process with pid 67368' 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@967 -- # kill 67368 00:18:17.163 [2024-07-14 21:17:28.642649] bdev_raid.c:1358:raid_bdev_fini_start: *DEBUG*: raid_bdev_fini_start 00:18:17.163 [2024-07-14 21:17:28.642675] bdev_raid.c: 474:_raid_bdev_destruct: *DEBUG*: raid_bdev_destruct 00:18:17.163 [2024-07-14 21:17:28.642685] bdev_raid.c: 451:raid_bdev_io_device_unregister_cb: *DEBUG*: raid bdev base bdevs is 0, going to free all in destruct 00:18:17.163 [2024-07-14 21:17:28.642689] bdev_raid.c: 367:raid_bdev_cleanup: *DEBUG*: raid_bdev_cleanup, 0x3b3893a35680 name raid_bdev1, state offline 00:18:17.163 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@972 -- # wait 67368 00:18:17.163 [2024-07-14 21:17:28.666924] bdev_raid.c:1375:raid_bdev_exit: *DEBUG*: raid_bdev_exit 00:18:17.422 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- bdev/bdev_raid.sh@784 -- # return 0 00:18:17.422 00:18:17.422 real 0m25.694s 00:18:17.422 user 0m39.492s 00:18:17.422 sys 0m2.442s 00:18:17.422 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:17.422 21:17:28 bdev_raid.raid_rebuild_test_sb_md_interleaved -- common/autotest_common.sh@10 -- # set +x 00:18:17.422 ************************************ 00:18:17.422 END TEST raid_rebuild_test_sb_md_interleaved 00:18:17.422 ************************************ 00:18:17.422 21:17:28 bdev_raid -- common/autotest_common.sh@1142 -- # return 0 00:18:17.422 21:17:28 bdev_raid -- bdev/bdev_raid.sh@916 -- # trap - EXIT 00:18:17.422 21:17:28 bdev_raid -- bdev/bdev_raid.sh@917 -- # cleanup 00:18:17.422 21:17:28 bdev_raid -- bdev/bdev_raid.sh@58 -- 
# '[' -n 67368 ']' 00:18:17.422 21:17:28 bdev_raid -- bdev/bdev_raid.sh@58 -- # ps -p 67368 00:18:17.422 21:17:28 bdev_raid -- bdev/bdev_raid.sh@62 -- # rm -rf /raidtest 00:18:17.422 00:18:17.422 real 11m5.818s 00:18:17.422 user 19m14.498s 00:18:17.422 sys 1m45.092s 00:18:17.422 21:17:28 bdev_raid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:17.422 ************************************ 00:18:17.422 END TEST bdev_raid 00:18:17.422 ************************************ 00:18:17.423 21:17:28 bdev_raid -- common/autotest_common.sh@10 -- # set +x 00:18:17.682 21:17:28 -- common/autotest_common.sh@1142 -- # return 0 00:18:17.682 21:17:28 -- spdk/autotest.sh@191 -- # run_test bdevperf_config /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:18:17.682 21:17:28 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:17.682 21:17:28 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:17.682 21:17:28 -- common/autotest_common.sh@10 -- # set +x 00:18:17.682 ************************************ 00:18:17.682 START TEST bdevperf_config 00:18:17.682 ************************************ 00:18:17.682 21:17:28 bdevperf_config -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test_config.sh 00:18:17.682 * Looking for test storage... 00:18:17.682 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/test_config.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/common.sh 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@5 -- # bdevperf=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/test_config.sh@12 -- # jsonconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/test_config.sh@13 -- # testconf=/home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/test_config.sh@15 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/test_config.sh@17 -- # create_job global read Malloc0 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=read 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:18:17.682 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/test_config.sh@18 -- # create_job job0 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:17.682 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:17.682 21:17:29 bdevperf_config -- 
bdevperf/test_config.sh@19 -- # create_job job1 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:17.682 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/test_config.sh@20 -- # create_job job2 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:17.682 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/test_config.sh@21 -- # create_job job3 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:18:17.682 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:17.682 21:17:29 bdevperf_config -- bdevperf/test_config.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:20.971 21:17:32 bdevperf_config -- bdevperf/test_config.sh@22 -- # bdevperf_output='[2024-07-14 21:17:29.164586] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:20.971 [2024-07-14 21:17:29.164773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:20.971 Using job config with 4 jobs 00:18:20.971 EAL: TSC is not safe to use in SMP mode 00:18:20.971 EAL: TSC is not invariant 00:18:20.971 [2024-07-14 21:17:29.701496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.971 [2024-07-14 21:17:29.786919] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:20.971 [2024-07-14 21:17:29.789216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.971 cpumask for '\''job0'\'' is too big 00:18:20.971 cpumask for '\''job1'\'' is too big 00:18:20.971 cpumask for '\''job2'\'' is too big 00:18:20.971 cpumask for '\''job3'\'' is too big 00:18:20.971 Running I/O for 2 seconds... 
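The run above hands bdevperf two inputs: -j test.conf carries the job sections that create_job just assembled, and --json conf.json supplies the bdevs those jobs target. A representative conf.json that would provide the Malloc0 device seen in the results below (illustrative only; the file shipped in the repo may create more or larger bdevs):

    cat > conf.json <<'EOF'
    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_malloc_create",
              "params": { "name": "Malloc0", "num_blocks": 262144, "block_size": 512 }
            }
          ]
        }
      ]
    }
    EOF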
00:18:20.971 00:18:20.971 Latency(us) 00:18:20.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.972 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:20.972 Malloc0 : 2.00 362387.00 353.89 0.00 0.00 706.18 214.11 1511.80 00:18:20.972 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:20.972 Malloc0 : 2.00 362368.71 353.88 0.00 0.00 706.07 161.05 1288.38 00:18:20.972 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:20.972 Malloc0 : 2.00 362418.21 353.92 0.00 0.00 705.83 187.11 1176.67 00:18:20.972 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:20.972 Malloc0 : 2.00 362401.74 353.91 0.00 0.00 705.69 144.29 1184.12 00:18:20.972 =================================================================================================================== 00:18:20.972 Total : 1449575.66 1415.60 0.00 0.00 705.94 144.29 1511.80' 00:18:20.972 21:17:32 bdevperf_config -- bdevperf/test_config.sh@23 -- # get_num_jobs '[2024-07-14 21:17:29.164586] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:20.972 [2024-07-14 21:17:29.164773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:20.972 Using job config with 4 jobs 00:18:20.972 EAL: TSC is not safe to use in SMP mode 00:18:20.972 EAL: TSC is not invariant 00:18:20.972 [2024-07-14 21:17:29.701496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.972 [2024-07-14 21:17:29.786919] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:20.972 [2024-07-14 21:17:29.789216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.972 cpumask for '\''job0'\'' is too big 00:18:20.972 cpumask for '\''job1'\'' is too big 00:18:20.972 cpumask for '\''job2'\'' is too big 00:18:20.972 cpumask for '\''job3'\'' is too big 00:18:20.972 Running I/O for 2 seconds... 00:18:20.972 00:18:20.972 Latency(us) 00:18:20.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.972 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:20.972 Malloc0 : 2.00 362387.00 353.89 0.00 0.00 706.18 214.11 1511.80 00:18:20.972 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:20.972 Malloc0 : 2.00 362368.71 353.88 0.00 0.00 706.07 161.05 1288.38 00:18:20.972 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:20.972 Malloc0 : 2.00 362418.21 353.92 0.00 0.00 705.83 187.11 1176.67 00:18:20.972 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:20.972 Malloc0 : 2.00 362401.74 353.91 0.00 0.00 705.69 144.29 1184.12 00:18:20.972 =================================================================================================================== 00:18:20.972 Total : 1449575.66 1415.60 0.00 0.00 705.94 144.29 1511.80' 00:18:20.972 21:17:32 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-14 21:17:29.164586] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:18:20.972 [2024-07-14 21:17:29.164773] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:20.972 Using job config with 4 jobs 00:18:20.972 EAL: TSC is not safe to use in SMP mode 00:18:20.972 EAL: TSC is not invariant 00:18:20.972 [2024-07-14 21:17:29.701496] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:20.972 [2024-07-14 21:17:29.786919] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:20.972 [2024-07-14 21:17:29.789216] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:20.972 cpumask for '\''job0'\'' is too big 00:18:20.972 cpumask for '\''job1'\'' is too big 00:18:20.972 cpumask for '\''job2'\'' is too big 00:18:20.972 cpumask for '\''job3'\'' is too big 00:18:20.972 Running I/O for 2 seconds... 00:18:20.972 00:18:20.972 Latency(us) 00:18:20.972 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:20.972 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:20.972 Malloc0 : 2.00 362387.00 353.89 0.00 0.00 706.18 214.11 1511.80 00:18:20.972 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:20.972 Malloc0 : 2.00 362368.71 353.88 0.00 0.00 706.07 161.05 1288.38 00:18:20.972 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:20.972 Malloc0 : 2.00 362418.21 353.92 0.00 0.00 705.83 187.11 1176.67 00:18:20.972 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:20.972 Malloc0 : 2.00 362401.74 353.91 0.00 0.00 705.69 144.29 1184.12 00:18:20.972 =================================================================================================================== 00:18:20.972 Total : 1449575.66 1415.60 0.00 0.00 705.94 144.29 1511.80' 00:18:20.972 21:17:32 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:20.972 21:17:32 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:20.972 21:17:32 bdevperf_config -- bdevperf/test_config.sh@23 -- # [[ 4 == \4 ]] 00:18:20.972 21:17:32 bdevperf_config -- bdevperf/test_config.sh@25 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -C -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:20.972 [2024-07-14 21:17:32.027628] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:20.972 [2024-07-14 21:17:32.027894] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:21.231 EAL: TSC is not safe to use in SMP mode 00:18:21.231 EAL: TSC is not invariant 00:18:21.231 [2024-07-14 21:17:32.560044] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:21.231 [2024-07-14 21:17:32.648710] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:21.231 [2024-07-14 21:17:32.651033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:21.231 cpumask for 'job0' is too big 00:18:21.231 cpumask for 'job1' is too big 00:18:21.231 cpumask for 'job2' is too big 00:18:21.231 cpumask for 'job3' is too big 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/test_config.sh@25 -- # bdevperf_output='Using job config with 4 jobs 00:18:23.758 Running I/O for 2 seconds... 
00:18:23.758 00:18:23.758 Latency(us) 00:18:23.758 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:23.758 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:23.758 Malloc0 : 2.00 383398.04 374.41 0.00 0.00 667.50 207.59 1489.46 00:18:23.758 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:23.758 Malloc0 : 2.00 383383.48 374.40 0.00 0.00 667.36 167.56 1266.04 00:18:23.758 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:23.758 Malloc0 : 2.00 383408.52 374.42 0.00 0.00 667.17 215.04 1094.75 00:18:23.758 Job: Malloc0 (Core Mask 0xff, workload: read, depth: 256, IO size: 1024) 00:18:23.758 Malloc0 : 2.00 383390.71 374.40 0.00 0.00 667.05 149.88 1117.09 00:18:23.758 =================================================================================================================== 00:18:23.758 Total : 1533580.74 1497.64 0.00 0.00 667.27 149.88 1489.46' 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/test_config.sh@27 -- # cleanup 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/test_config.sh@29 -- # create_job job0 write Malloc0 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:23.758 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/test_config.sh@30 -- # create_job job1 write Malloc0 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:23.758 21:17:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:23.759 00:18:23.759 21:17:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:23.759 21:17:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:23.759 21:17:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:23.759 21:17:34 bdevperf_config -- bdevperf/test_config.sh@31 -- # create_job job2 write Malloc0 00:18:23.759 21:17:34 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:23.759 21:17:34 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=write 00:18:23.759 21:17:34 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0 00:18:23.759 21:17:34 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:23.759 21:17:34 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:23.759 00:18:23.759 21:17:34 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:23.759 21:17:34 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:23.759 21:17:34 bdevperf_config -- bdevperf/test_config.sh@32 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 
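Each create_job call traced above (bdevperf/common.sh@8-20) appends one INI-style section to the -j file. A minimal reconstruction from the xtrace; the repo's helper may differ in detail, and $testdir is assumed to point at test/bdev/bdevperf:

    create_job() {
        local job_section=$1
        local rw=$2
        local filename=$3

        # Build the section: a bare [jobN] header, plus rw= and filename=
        # lines only when the caller supplied them.
        job="[$job_section]"
        [[ -n $rw ]] && job+=$'\n'"rw=$rw"
        [[ -n $filename ]] && job+=$'\n'"filename=$filename"

        echo "$job" >> "$testdir/test.conf"
    }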
00:18:26.293 21:17:37 bdevperf_config -- bdevperf/test_config.sh@32 -- # bdevperf_output='[2024-07-14 21:17:34.894655] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:26.293 [2024-07-14 21:17:34.894951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:26.293 Using job config with 3 jobs 00:18:26.293 EAL: TSC is not safe to use in SMP mode 00:18:26.293 EAL: TSC is not invariant 00:18:26.293 [2024-07-14 21:17:35.404035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.293 [2024-07-14 21:17:35.480362] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:26.293 [2024-07-14 21:17:35.482739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.293 cpumask for '\''job0'\'' is too big 00:18:26.293 cpumask for '\''job1'\'' is too big 00:18:26.293 cpumask for '\''job2'\'' is too big 00:18:26.293 Running I/O for 2 seconds... 00:18:26.293 00:18:26.293 Latency(us) 00:18:26.293 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.293 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:26.293 Malloc0 : 2.00 495585.41 483.97 0.00 0.00 516.36 215.04 1042.62 00:18:26.293 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:26.293 Malloc0 : 2.00 495610.32 483.99 0.00 0.00 516.21 159.19 1020.28 00:18:26.293 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:26.293 Malloc0 : 2.00 495594.96 483.98 0.00 0.00 516.15 113.11 997.93 00:18:26.293 =================================================================================================================== 00:18:26.293 Total : 1486790.69 1451.94 0.00 0.00 516.24 113.11 1042.62' 00:18:26.293 21:17:37 bdevperf_config -- bdevperf/test_config.sh@33 -- # get_num_jobs '[2024-07-14 21:17:34.894655] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:26.294 [2024-07-14 21:17:34.894951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:26.294 Using job config with 3 jobs 00:18:26.294 EAL: TSC is not safe to use in SMP mode 00:18:26.294 EAL: TSC is not invariant 00:18:26.294 [2024-07-14 21:17:35.404035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.294 [2024-07-14 21:17:35.480362] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:26.294 [2024-07-14 21:17:35.482739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.294 cpumask for '\''job0'\'' is too big 00:18:26.294 cpumask for '\''job1'\'' is too big 00:18:26.294 cpumask for '\''job2'\'' is too big 00:18:26.294 Running I/O for 2 seconds... 
00:18:26.294 00:18:26.294 Latency(us) 00:18:26.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:26.294 Malloc0 : 2.00 495585.41 483.97 0.00 0.00 516.36 215.04 1042.62 00:18:26.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:26.294 Malloc0 : 2.00 495610.32 483.99 0.00 0.00 516.21 159.19 1020.28 00:18:26.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:26.294 Malloc0 : 2.00 495594.96 483.98 0.00 0.00 516.15 113.11 997.93 00:18:26.294 =================================================================================================================== 00:18:26.294 Total : 1486790.69 1451.94 0.00 0.00 516.24 113.11 1042.62' 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-14 21:17:34.894655] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:26.294 [2024-07-14 21:17:34.894951] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:26.294 Using job config with 3 jobs 00:18:26.294 EAL: TSC is not safe to use in SMP mode 00:18:26.294 EAL: TSC is not invariant 00:18:26.294 [2024-07-14 21:17:35.404035] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.294 [2024-07-14 21:17:35.480362] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:26.294 [2024-07-14 21:17:35.482739] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.294 cpumask for '\''job0'\'' is too big 00:18:26.294 cpumask for '\''job1'\'' is too big 00:18:26.294 cpumask for '\''job2'\'' is too big 00:18:26.294 Running I/O for 2 seconds... 
00:18:26.294 00:18:26.294 Latency(us) 00:18:26.294 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:26.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:26.294 Malloc0 : 2.00 495585.41 483.97 0.00 0.00 516.36 215.04 1042.62 00:18:26.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:26.294 Malloc0 : 2.00 495610.32 483.99 0.00 0.00 516.21 159.19 1020.28 00:18:26.294 Job: Malloc0 (Core Mask 0xff, workload: write, depth: 256, IO size: 1024) 00:18:26.294 Malloc0 : 2.00 495594.96 483.98 0.00 0.00 516.15 113.11 997.93 00:18:26.294 =================================================================================================================== 00:18:26.294 Total : 1486790.69 1451.94 0.00 0.00 516.24 113.11 1042.62' 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/test_config.sh@33 -- # [[ 3 == \3 ]] 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/test_config.sh@35 -- # cleanup 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/test_config.sh@37 -- # create_job global rw Malloc0:Malloc1 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=global 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@9 -- # local rw=rw 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@10 -- # local filename=Malloc0:Malloc1 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@12 -- # [[ global == \g\l\o\b\a\l ]] 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@13 -- # cat 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@18 -- # job='[global]' 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:26.294 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/test_config.sh@38 -- # create_job job0 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job0 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job0 == \g\l\o\b\a\l ]] 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job0]' 00:18:26.294 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/test_config.sh@39 -- # create_job job1 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job1 00:18:26.294 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job1 == \g\l\o\b\a\l ]] 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job1]' 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/test_config.sh@40 -- # create_job job2 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job2 00:18:26.294 21:17:37 
bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job2 == \g\l\o\b\a\l ]] 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job2]' 00:18:26.294 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/test_config.sh@41 -- # create_job job3 00:18:26.294 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@8 -- # local job_section=job3 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@9 -- # local rw= 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@10 -- # local filename= 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@12 -- # [[ job3 == \g\l\o\b\a\l ]] 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@18 -- # job='[job3]' 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@19 -- # echo 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/common.sh@20 -- # cat 00:18:26.294 21:17:37 bdevperf_config -- bdevperf/test_config.sh@42 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -t 2 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/conf.json -j /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:29.625 21:17:40 bdevperf_config -- bdevperf/test_config.sh@42 -- # bdevperf_output='[2024-07-14 21:17:37.721648] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:29.625 [2024-07-14 21:17:37.721919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:29.625 Using job config with 4 jobs 00:18:29.625 EAL: TSC is not safe to use in SMP mode 00:18:29.625 EAL: TSC is not invariant 00:18:29.625 [2024-07-14 21:17:38.222835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.625 [2024-07-14 21:17:38.295374] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:29.625 [2024-07-14 21:17:38.297694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.625 cpumask for '\''job0'\'' is too big 00:18:29.625 cpumask for '\''job1'\'' is too big 00:18:29.625 cpumask for '\''job2'\'' is too big 00:18:29.625 cpumask for '\''job3'\'' is too big 00:18:29.625 Running I/O for 2 seconds... 
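With the global section carrying both the workload and a colon-separated filename list, the bare [jobN] sections inherit rw=rw against Malloc0 and Malloc1 at once. The assembled test.conf for this run plausibly reads as follows (layout inferred from the create_job arguments above, not dumped from disk):

    [global]
    rw=rw
    filename=Malloc0:Malloc1

    [job0]
    [job1]
    [job2]
    [job3]

The eight-row table that follows bears this out: every job lands I/O on both Malloc bdevs.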
00:18:29.625 00:18:29.625 Latency(us) 00:18:29.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.625 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc0 : 2.00 180535.70 176.30 0.00 0.00 1417.68 443.11 2770.39 00:18:29.625 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc1 : 2.00 180528.12 176.30 0.00 0.00 1417.56 389.12 2740.60 00:18:29.625 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc0 : 2.00 180519.47 176.29 0.00 0.00 1417.21 390.98 2293.76 00:18:29.625 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc1 : 2.00 180506.58 176.28 0.00 0.00 1417.13 338.85 2278.87 00:18:29.625 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc0 : 2.00 180555.16 176.32 0.00 0.00 1416.33 377.95 2308.66 00:18:29.625 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc1 : 2.00 180545.88 176.31 0.00 0.00 1416.24 333.27 2323.55 00:18:29.625 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc0 : 2.00 180536.24 176.30 0.00 0.00 1415.97 418.91 2278.87 00:18:29.625 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc1 : 2.00 180526.51 176.30 0.00 0.00 1415.85 335.13 2249.08 00:18:29.625 =================================================================================================================== 00:18:29.625 Total : 1444253.64 1410.40 0.00 0.00 1416.75 333.27 2770.39' 00:18:29.625 21:17:40 bdevperf_config -- bdevperf/test_config.sh@43 -- # get_num_jobs '[2024-07-14 21:17:37.721648] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:29.625 [2024-07-14 21:17:37.721919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:29.625 Using job config with 4 jobs 00:18:29.625 EAL: TSC is not safe to use in SMP mode 00:18:29.625 EAL: TSC is not invariant 00:18:29.625 [2024-07-14 21:17:38.222835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.625 [2024-07-14 21:17:38.295374] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:29.625 [2024-07-14 21:17:38.297694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.625 cpumask for '\''job0'\'' is too big 00:18:29.625 cpumask for '\''job1'\'' is too big 00:18:29.625 cpumask for '\''job2'\'' is too big 00:18:29.625 cpumask for '\''job3'\'' is too big 00:18:29.625 Running I/O for 2 seconds... 
00:18:29.625 00:18:29.625 Latency(us) 00:18:29.625 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.625 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc0 : 2.00 180535.70 176.30 0.00 0.00 1417.68 443.11 2770.39 00:18:29.625 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc1 : 2.00 180528.12 176.30 0.00 0.00 1417.56 389.12 2740.60 00:18:29.625 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc0 : 2.00 180519.47 176.29 0.00 0.00 1417.21 390.98 2293.76 00:18:29.625 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc1 : 2.00 180506.58 176.28 0.00 0.00 1417.13 338.85 2278.87 00:18:29.625 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc0 : 2.00 180555.16 176.32 0.00 0.00 1416.33 377.95 2308.66 00:18:29.625 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc1 : 2.00 180545.88 176.31 0.00 0.00 1416.24 333.27 2323.55 00:18:29.625 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc0 : 2.00 180536.24 176.30 0.00 0.00 1415.97 418.91 2278.87 00:18:29.625 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.625 Malloc1 : 2.00 180526.51 176.30 0.00 0.00 1415.85 335.13 2249.08 00:18:29.625 =================================================================================================================== 00:18:29.625 Total : 1444253.64 1410.40 0.00 0.00 1416.75 333.27 2770.39' 00:18:29.625 21:17:40 bdevperf_config -- bdevperf/common.sh@32 -- # echo '[2024-07-14 21:17:37.721648] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:29.626 [2024-07-14 21:17:37.721919] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:29.626 Using job config with 4 jobs 00:18:29.626 EAL: TSC is not safe to use in SMP mode 00:18:29.626 EAL: TSC is not invariant 00:18:29.626 [2024-07-14 21:17:38.222835] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.626 [2024-07-14 21:17:38.295374] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:29.626 [2024-07-14 21:17:38.297694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:29.626 cpumask for '\''job0'\'' is too big 00:18:29.626 cpumask for '\''job1'\'' is too big 00:18:29.626 cpumask for '\''job2'\'' is too big 00:18:29.626 cpumask for '\''job3'\'' is too big 00:18:29.626 Running I/O for 2 seconds... 
00:18:29.626 00:18:29.626 Latency(us) 00:18:29.626 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.626 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.626 Malloc0 : 2.00 180535.70 176.30 0.00 0.00 1417.68 443.11 2770.39 00:18:29.626 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.626 Malloc1 : 2.00 180528.12 176.30 0.00 0.00 1417.56 389.12 2740.60 00:18:29.626 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.626 Malloc0 : 2.00 180519.47 176.29 0.00 0.00 1417.21 390.98 2293.76 00:18:29.626 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.626 Malloc1 : 2.00 180506.58 176.28 0.00 0.00 1417.13 338.85 2278.87 00:18:29.626 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.626 Malloc0 : 2.00 180555.16 176.32 0.00 0.00 1416.33 377.95 2308.66 00:18:29.626 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.626 Malloc1 : 2.00 180545.88 176.31 0.00 0.00 1416.24 333.27 2323.55 00:18:29.626 Job: Malloc0 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.626 Malloc0 : 2.00 180536.24 176.30 0.00 0.00 1415.97 418.91 2278.87 00:18:29.626 Job: Malloc1 (Core Mask 0xff, workload: rw, percentage: 70, depth: 256, IO size: 1024) 00:18:29.626 Malloc1 : 2.00 180526.51 176.30 0.00 0.00 1415.85 335.13 2249.08 00:18:29.626 =================================================================================================================== 00:18:29.626 Total : 1444253.64 1410.40 0.00 0.00 1416.75 333.27 2770.39' 00:18:29.626 21:17:40 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE 'Using job config with [0-9]+ jobs' 00:18:29.626 21:17:40 bdevperf_config -- bdevperf/common.sh@32 -- # grep -oE '[0-9]+' 00:18:29.626 21:17:40 bdevperf_config -- bdevperf/test_config.sh@43 -- # [[ 4 == \4 ]] 00:18:29.626 21:17:40 bdevperf_config -- bdevperf/test_config.sh@44 -- # cleanup 00:18:29.626 21:17:40 bdevperf_config -- bdevperf/common.sh@36 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdevperf/test.conf 00:18:29.626 21:17:40 bdevperf_config -- bdevperf/test_config.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:18:29.626 00:18:29.626 real 0m11.552s 00:18:29.626 user 0m9.220s 00:18:29.626 sys 0m2.335s 00:18:29.626 21:17:40 bdevperf_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:29.626 ************************************ 00:18:29.626 END TEST bdevperf_config 00:18:29.626 ************************************ 00:18:29.626 21:17:40 bdevperf_config -- common/autotest_common.sh@10 -- # set +x 00:18:29.626 21:17:40 -- common/autotest_common.sh@1142 -- # return 0 00:18:29.626 21:17:40 -- spdk/autotest.sh@192 -- # uname -s 00:18:29.626 21:17:40 -- spdk/autotest.sh@192 -- # [[ FreeBSD == Linux ]] 00:18:29.626 21:17:40 -- spdk/autotest.sh@198 -- # uname -s 00:18:29.626 21:17:40 -- spdk/autotest.sh@198 -- # [[ FreeBSD == Linux ]] 00:18:29.626 21:17:40 -- spdk/autotest.sh@211 -- # '[' 1 -eq 1 ']' 00:18:29.626 21:17:40 -- spdk/autotest.sh@212 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:18:29.626 21:17:40 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:29.626 21:17:40 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:29.626 21:17:40 -- common/autotest_common.sh@10 -- # set +x 00:18:29.626 
************************************ 00:18:29.626 START TEST blockdev_nvme 00:18:29.626 ************************************ 00:18:29.626 21:17:40 blockdev_nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:18:29.626 * Looking for test storage... 00:18:29.626 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:18:29.626 21:17:40 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@670 -- # QOS_DEV_1=Malloc_0 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@671 -- # QOS_DEV_2=Null_1 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@672 -- # QOS_RUN_TIME=5 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@674 -- # uname -s 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@674 -- # '[' FreeBSD = Linux ']' 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@679 -- # PRE_RESERVED_MEM=2048 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@682 -- # test_type=nvme 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@683 -- # crypto_device= 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@684 -- # dek= 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@685 -- # env_ctx= 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@686 -- # wait_for_rpc= 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@687 -- # '[' -n '' ']' 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == bdev ]] 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@690 -- # [[ nvme == crypto_* ]] 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@693 -- # start_spdk_tgt 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=68108 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:18:29.626 21:17:40 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 68108 00:18:29.626 21:17:40 blockdev_nvme -- common/autotest_common.sh@829 -- # '[' -z 68108 ']' 00:18:29.626 21:17:40 blockdev_nvme -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:29.626 21:17:40 blockdev_nvme -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:29.626 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:29.626 21:17:40 blockdev_nvme -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
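waitforlisten, entered above with rpc_addr=/var/tmp/spdk.sock and max_retries=100, blocks until the freshly forked spdk_tgt answers on its RPC socket. A minimal sketch under those assumptions; the real helper in autotest_common.sh also verifies the socket with an actual RPC round trip rather than a bare file test:

    waitforlisten() {
        local pid=$1
        local rpc_addr=${2:-/var/tmp/spdk.sock}
        local max_retries=100

        while ((max_retries-- > 0)); do
            kill -0 "$pid" 2>/dev/null || return 1   # target died during startup
            [[ -S $rpc_addr ]] && return 0           # RPC socket is listening
            sleep 0.1
        done
        return 1
    }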
00:18:29.626 21:17:40 blockdev_nvme -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:29.626 21:17:40 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:29.626 [2024-07-14 21:17:40.751961] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:29.626 [2024-07-14 21:17:40.752210] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:29.884 EAL: TSC is not safe to use in SMP mode 00:18:29.884 EAL: TSC is not invariant 00:18:29.884 [2024-07-14 21:17:41.270177] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:29.884 [2024-07-14 21:17:41.342669] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:29.884 [2024-07-14 21:17:41.345087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:30.449 21:17:41 blockdev_nvme -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:30.449 21:17:41 blockdev_nvme -- common/autotest_common.sh@862 -- # return 0 00:18:30.449 21:17:41 blockdev_nvme -- bdev/blockdev.sh@694 -- # case "$test_type" in 00:18:30.449 21:17:41 blockdev_nvme -- bdev/blockdev.sh@699 -- # setup_nvme_conf 00:18:30.449 21:17:41 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:18:30.449 21:17:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:18:30.449 21:17:41 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:30.449 21:17:41 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } } ] }'\''' 00:18:30.449 21:17:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.449 21:17:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:30.449 [2024-07-14 21:17:41.805925] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:30.449 21:17:41 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.449 21:17:41 blockdev_nvme -- bdev/blockdev.sh@737 -- # rpc_cmd bdev_wait_for_examine 00:18:30.449 21:17:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.449 21:17:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:30.449 21:17:41 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.449 21:17:41 blockdev_nvme -- bdev/blockdev.sh@740 -- # cat 00:18:30.449 21:17:41 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n accel 00:18:30.449 21:17:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.450 21:17:41 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n bdev 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.450 21:17:41 blockdev_nvme -- bdev/blockdev.sh@740 -- # rpc_cmd save_subsystem_config -n iobuf 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.450 21:17:41 blockdev_nvme -- 
common/autotest_common.sh@10 -- # set +x 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.450 21:17:41 blockdev_nvme -- bdev/blockdev.sh@748 -- # mapfile -t bdevs 00:18:30.450 21:17:41 blockdev_nvme -- bdev/blockdev.sh@748 -- # rpc_cmd bdev_get_bdevs 00:18:30.450 21:17:41 blockdev_nvme -- bdev/blockdev.sh@748 -- # jq -r '.[] | select(.claimed == false)' 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:30.450 21:17:41 blockdev_nvme -- bdev/blockdev.sh@749 -- # mapfile -t bdevs_name 00:18:30.450 21:17:41 blockdev_nvme -- bdev/blockdev.sh@749 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "81619ce7-4226-11ef-aa83-81fbc7dfef58"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "81619ce7-4226-11ef-aa83-81fbc7dfef58",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:18:30.450 21:17:41 blockdev_nvme -- bdev/blockdev.sh@749 -- # jq -r .name 00:18:30.450 21:17:41 blockdev_nvme -- bdev/blockdev.sh@750 -- # bdev_list=("${bdevs_name[@]}") 00:18:30.450 21:17:41 blockdev_nvme -- bdev/blockdev.sh@752 -- # hello_world_bdev=Nvme0n1 00:18:30.450 21:17:41 blockdev_nvme -- bdev/blockdev.sh@753 -- # trap - SIGINT SIGTERM EXIT 00:18:30.450 21:17:41 blockdev_nvme -- bdev/blockdev.sh@754 -- # killprocess 68108 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@948 -- # '[' -z 68108 ']' 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@952 -- # kill -0 68108 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@953 -- # uname 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@956 -- # ps -c -o command 68108 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@956 -- # tail -1 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:18:30.450 killing process with pid 68108 
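killprocess, run here for spdk_tgt and earlier for the bdevperf pid, resolves the process name portably before signalling. Reconstructed from the xtrace; the FreeBSD branch is verbatim from the trace, while the Linux ps flags are an assumption:

    killprocess() {
        local pid=$1
        [[ -n $pid ]] || return 1
        kill -0 "$pid" || return 0    # already gone, nothing to kill

        if [[ $(uname) == Linux ]]; then
            process_name=$(ps -o comm= -p "$pid")    # assumed flags
        else
            # FreeBSD: -c trims to the executable name; tail -1 skips the header.
            process_name=$(ps -c -o command "$pid" | tail -1)
        fi

        # The real helper special-cases process_name == sudo here.
        echo "killing process with pid $pid"
        kill "$pid"
        wait "$pid"
    }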
00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68108' 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@967 -- # kill 68108 00:18:30.450 21:17:41 blockdev_nvme -- common/autotest_common.sh@972 -- # wait 68108 00:18:30.708 21:17:42 blockdev_nvme -- bdev/blockdev.sh@758 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:30.708 21:17:42 blockdev_nvme -- bdev/blockdev.sh@760 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:30.708 21:17:42 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:18:30.708 21:17:42 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:30.708 21:17:42 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:30.708 ************************************ 00:18:30.708 START TEST bdev_hello_world 00:18:30.708 ************************************ 00:18:30.708 21:17:42 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:18:30.965 [2024-07-14 21:17:42.256176] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:30.965 [2024-07-14 21:17:42.256343] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:31.531 EAL: TSC is not safe to use in SMP mode 00:18:31.531 EAL: TSC is not invariant 00:18:31.531 [2024-07-14 21:17:42.802673] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.531 [2024-07-14 21:17:42.888490] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:31.531 [2024-07-14 21:17:42.891003] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.531 [2024-07-14 21:17:42.948743] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:31.531 [2024-07-14 21:17:43.021191] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:18:31.531 [2024-07-14 21:17:43.021221] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:18:31.531 [2024-07-14 21:17:43.021231] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:18:31.531 [2024-07-14 21:17:43.024258] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:18:31.531 [2024-07-14 21:17:43.024808] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:18:31.531 [2024-07-14 21:17:43.024834] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:18:31.531 [2024-07-14 21:17:43.025078] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
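The whole bdev_hello_world test reduces to one run of the example binary against the NVMe bdev attached earlier; reproduced standalone with this workspace's paths:

    cd /home/vagrant/spdk_repo/spdk
    ./build/examples/hello_bdev --json test/bdev/bdev.json -b Nvme0n1
    # Expected NOTICE flow, matching the trace: open the bdev, open an
    # io channel, write "Hello World!", read it back, stop the app.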
00:18:31.531 00:18:31.531 [2024-07-14 21:17:43.025102] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:18:31.789 00:18:31.789 real 0m0.951s 00:18:31.789 user 0m0.352s 00:18:31.789 sys 0m0.597s 00:18:31.789 21:17:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:31.789 21:17:43 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:31.789 ************************************ 00:18:31.789 END TEST bdev_hello_world 00:18:31.789 ************************************ 00:18:31.789 21:17:43 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:31.789 21:17:43 blockdev_nvme -- bdev/blockdev.sh@761 -- # run_test bdev_bounds bdev_bounds '' 00:18:31.789 21:17:43 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:31.789 21:17:43 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:31.789 21:17:43 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:31.789 ************************************ 00:18:31.789 START TEST bdev_bounds 00:18:31.789 ************************************ 00:18:31.789 21:17:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1123 -- # bdev_bounds '' 00:18:31.789 21:17:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # bdevio_pid=68175 00:18:31.789 21:17:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:18:31.789 Process bdevio pid: 68175 00:18:31.789 21:17:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 2048 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:18:31.789 21:17:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # echo 'Process bdevio pid: 68175' 00:18:31.789 21:17:43 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # waitforlisten 68175 00:18:31.789 21:17:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@829 -- # '[' -z 68175 ']' 00:18:31.789 21:17:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:31.789 21:17:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:31.789 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:31.789 21:17:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:31.789 21:17:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:31.789 21:17:43 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:31.789 [2024-07-14 21:17:43.260475] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:31.789 [2024-07-14 21:17:43.260762] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 2048 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:32.360 EAL: TSC is not safe to use in SMP mode 00:18:32.360 EAL: TSC is not invariant 00:18:32.360 [2024-07-14 21:17:43.794594] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:32.360 [2024-07-14 21:17:43.876767] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:32.360 [2024-07-14 21:17:43.876844] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
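bdev_bounds launches bdevio as an RPC-driven server: -w makes it idle until tests are requested, and -s 2048 pre-reserves 2048 MB of hugepage memory (the PRE_RESERVED_MEM seen in blockdev.sh). The companion script then fires the suite over the RPC socket, as the next trace lines show; a sketch of the sequence:

    ./test/bdev/bdevio/bdevio -w -s 2048 --json test/bdev/bdev.json &
    bdevio_pid=$!
    # ...waitforlisten "$bdevio_pid"...
    ./test/bdev/bdevio/tests.py perform_tests
    killprocess "$bdevio_pid"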
00:18:32.360 [2024-07-14 21:17:43.876869] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:18:32.360 [2024-07-14 21:17:43.880349] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:32.360 [2024-07-14 21:17:43.880251] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.360 [2024-07-14 21:17:43.880342] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:32.618 [2024-07-14 21:17:43.939113] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@862 -- # return 0 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:18:32.876 I/O targets: 00:18:32.876 Nvme0n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:18:32.876 00:18:32.876 00:18:32.876 CUnit - A unit testing framework for C - Version 2.1-3 00:18:32.876 http://cunit.sourceforge.net/ 00:18:32.876 00:18:32.876 00:18:32.876 Suite: bdevio tests on: Nvme0n1 00:18:32.876 Test: blockdev write read block ...passed 00:18:32.876 Test: blockdev write zeroes read block ...passed 00:18:32.876 Test: blockdev write zeroes read no split ...passed 00:18:32.876 Test: blockdev write zeroes read split ...passed 00:18:32.876 Test: blockdev write zeroes read split partial ...passed 00:18:32.876 Test: blockdev reset ...[2024-07-14 21:17:44.379840] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:18:32.876 [2024-07-14 21:17:44.381834] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:18:32.876 passed 00:18:32.876 Test: blockdev write read 8 blocks ...passed 00:18:32.876 Test: blockdev write read size > 128k ...passed 00:18:32.876 Test: blockdev write read invalid size ...passed 00:18:32.876 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:32.876 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:32.876 Test: blockdev write read max offset ...passed 00:18:32.876 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:32.876 Test: blockdev writev readv 8 blocks ...passed 00:18:32.876 Test: blockdev writev readv 30 x 1block ...passed 00:18:32.876 Test: blockdev writev readv block ...passed 00:18:32.876 Test: blockdev writev readv size > 128k ...passed 00:18:32.876 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:32.876 Test: blockdev comparev and writev ...[2024-07-14 21:17:44.386301] nvme_qpair.c: 247:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x180008000 len:0x1000 00:18:32.876 [2024-07-14 21:17:44.386379] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:18:32.876 passed 00:18:32.876 Test: blockdev nvme passthru rw ...passed 00:18:32.876 Test: blockdev nvme passthru vendor specific ...[2024-07-14 21:17:44.386864] nvme_qpair.c: 220:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:18:32.876 passed 00:18:32.876 Test: blockdev nvme admin passthru ...[2024-07-14 21:17:44.386888] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:18:32.876 passed 00:18:32.876 Test: blockdev copy ...passed 00:18:32.876 00:18:32.876 Run Summary: Type Total Ran Passed Failed Inactive 00:18:32.876 suites 1 1 n/a 0 0 00:18:32.876 tests 23 23 23 0 0 00:18:32.876 asserts 152 152 152 0 n/a 00:18:32.876 00:18:32.876 Elapsed time = 0.039 seconds 00:18:32.876 0 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # killprocess 68175 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@948 -- # '[' -z 68175 ']' 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@952 -- # kill -0 68175 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # uname 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # ps -c -o command 68175 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # tail -1 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@956 -- # process_name=bdevio 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # '[' bdevio = sudo ']' 00:18:32.876 killing process with pid 68175 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68175' 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@967 -- # kill 68175 00:18:32.876 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # wait 68175 00:18:33.134 21:17:44 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@296 -- # trap - SIGINT SIGTERM EXIT 00:18:33.135 00:18:33.135 real 0m1.350s 00:18:33.135 user 0m2.581s 00:18:33.135 sys 0m0.571s 00:18:33.135 21:17:44 
blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:33.135 21:17:44 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:18:33.135 ************************************ 00:18:33.135 END TEST bdev_bounds 00:18:33.135 ************************************ 00:18:33.135 21:17:44 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:33.135 21:17:44 blockdev_nvme -- bdev/blockdev.sh@762 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:18:33.135 21:17:44 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:18:33.135 21:17:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:33.135 21:17:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:33.135 ************************************ 00:18:33.135 START TEST bdev_nbd 00:18:33.135 ************************************ 00:18:33.135 21:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1123 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json Nvme0n1 '' 00:18:33.135 21:17:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # uname -s 00:18:33.135 21:17:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # [[ FreeBSD == Linux ]] 00:18:33.135 21:17:44 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@300 -- # return 0 00:18:33.135 00:18:33.135 real 0m0.005s 00:18:33.135 user 0m0.001s 00:18:33.135 sys 0m0.007s 00:18:33.135 21:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:33.135 21:17:44 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:18:33.135 ************************************ 00:18:33.135 END TEST bdev_nbd 00:18:33.135 ************************************ 00:18:33.393 21:17:44 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:33.393 21:17:44 blockdev_nvme -- bdev/blockdev.sh@763 -- # [[ y == y ]] 00:18:33.393 21:17:44 blockdev_nvme -- bdev/blockdev.sh@764 -- # '[' nvme = nvme ']' 00:18:33.393 skipping fio tests on NVMe due to multi-ns failures. 00:18:33.393 21:17:44 blockdev_nvme -- bdev/blockdev.sh@766 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 00:18:33.393 21:17:44 blockdev_nvme -- bdev/blockdev.sh@775 -- # trap cleanup SIGINT SIGTERM EXIT 00:18:33.393 21:17:44 blockdev_nvme -- bdev/blockdev.sh@777 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:33.393 21:17:44 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:18:33.393 21:17:44 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:33.393 21:17:44 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:33.393 ************************************ 00:18:33.393 START TEST bdev_verify 00:18:33.393 ************************************ 00:18:33.393 21:17:44 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:18:33.393 [2024-07-14 21:17:44.718000] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
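bdev_verify reuses bdevperf with a data-integrity workload. Glossing the flags from the command above (-C is passed through exactly as in the trace and left to bdevperf's usage text; the rest are standard bdevperf options):

    # -q 128     queue depth per job
    # -o 4096    I/O size in bytes
    # -w verify  write a pattern, read it back, compare
    # -t 5       run time in seconds
    # -m 0x3     core mask: two reactors, matching the two cores started below
    ./build/examples/bdevperf --json test/bdev/bdev.json \
        -q 128 -o 4096 -w verify -t 5 -C -m 0x3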
00:18:33.393 [2024-07-14 21:17:44.718350] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:33.959 EAL: TSC is not safe to use in SMP mode 00:18:33.959 EAL: TSC is not invariant 00:18:33.959 [2024-07-14 21:17:45.245781] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:33.959 [2024-07-14 21:17:45.334770] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:33.959 [2024-07-14 21:17:45.334834] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:33.959 [2024-07-14 21:17:45.337795] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:33.959 [2024-07-14 21:17:45.337786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:33.959 [2024-07-14 21:17:45.396448] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:33.959 Running I/O for 5 seconds... 00:18:39.227 00:18:39.227 Latency(us) 00:18:39.227 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:39.227 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:18:39.227 Verification LBA range: start 0x0 length 0xa0000 00:18:39.227 Nvme0n1 : 5.01 19630.02 76.68 0.00 0.00 6511.63 763.35 11260.28 00:18:39.227 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:39.227 Verification LBA range: start 0xa0000 length 0xa0000 00:18:39.227 Nvme0n1 : 5.00 19564.49 76.42 0.00 0.00 6532.70 770.79 11081.55 00:18:39.227 =================================================================================================================== 00:18:39.227 Total : 39194.51 153.10 0.00 0.00 6522.15 763.35 11260.28 00:18:39.802 00:18:39.802 real 0m6.497s 00:18:39.802 user 0m11.575s 00:18:39.802 sys 0m0.588s 00:18:39.802 21:17:51 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:39.802 ************************************ 00:18:39.802 END TEST bdev_verify 00:18:39.802 ************************************ 00:18:39.802 21:17:51 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:18:39.802 21:17:51 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:39.802 21:17:51 blockdev_nvme -- bdev/blockdev.sh@778 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:39.802 21:17:51 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 16 -le 1 ']' 00:18:39.802 21:17:51 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:39.802 21:17:51 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:39.802 ************************************ 00:18:39.802 START TEST bdev_verify_big_io 00:18:39.802 ************************************ 00:18:39.802 21:17:51 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:18:39.802 [2024-07-14 21:17:51.277511] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
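(Editor's sketch, for readers reproducing the bdev_verify pass that just completed above by hand: the stage is driven entirely by bdevperf against the generated bdev JSON config. This is a minimal sketch in plain sh, assuming the same /home/vagrant/spdk_repo checkout this bot uses; the flags are copied verbatim from the command logged at the start of TEST bdev_verify, and the flag meanings in the comments are the usual bdevperf ones, not guaranteed across every SPDK revision.)
# re-run the bdev_verify workload standalone (assumes the repo layout above)
# -q 128  : queue depth
# -o 4096 : I/O size in bytes (4 KiB)
# -w verify : write-then-read-back verification workload
# -t 5    : run time in seconds
# -m 0x3  : core mask, cores 0 and 1; -C is carried over verbatim from the log
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
    -q 128 -o 4096 -w verify -t 5 -C -m 0x3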
00:18:39.802 [2024-07-14 21:17:51.277739] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:40.855 EAL: TSC is not safe to use in SMP mode 00:18:40.855 EAL: TSC is not invariant 00:18:40.855 [2024-07-14 21:17:51.997423] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:18:40.855 [2024-07-14 21:17:52.103489] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:40.855 [2024-07-14 21:17:52.103553] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:18:40.855 [2024-07-14 21:17:52.106866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:40.855 [2024-07-14 21:17:52.106855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:40.855 [2024-07-14 21:17:52.168097] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:40.855 Running I/O for 5 seconds... 00:18:46.141 00:18:46.141 Latency(us) 00:18:46.141 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:46.141 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:18:46.141 Verification LBA range: start 0x0 length 0xa000 00:18:46.141 Nvme0n1 : 5.01 9180.90 573.81 0.00 0.00 13863.91 207.59 25976.10 00:18:46.141 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:18:46.141 Verification LBA range: start 0xa000 length 0xa000 00:18:46.141 Nvme0n1 : 5.01 9181.43 573.84 0.00 0.00 13863.50 139.64 23950.44 00:18:46.141 =================================================================================================================== 00:18:46.141 Total : 18362.34 1147.65 0.00 0.00 13863.70 139.64 25976.10 00:18:49.427 00:18:49.427 real 0m9.137s 00:18:49.427 user 0m16.503s 00:18:49.427 sys 0m0.756s 00:18:49.427 ************************************ 00:18:49.427 END TEST bdev_verify_big_io 00:18:49.427 ************************************ 00:18:49.427 21:18:00 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:49.427 21:18:00 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:18:49.427 21:18:00 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:49.427 21:18:00 blockdev_nvme -- bdev/blockdev.sh@779 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:49.427 21:18:00 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:18:49.427 21:18:00 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:49.427 21:18:00 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:49.427 ************************************ 00:18:49.427 START TEST bdev_write_zeroes 00:18:49.427 ************************************ 00:18:49.427 21:18:00 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:49.427 [2024-07-14 21:18:00.470103] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
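(Editor's note on the big-I/O pass that completed above: it is the same bdevperf harness with only the transfer size changed, -o 65536 in place of -o 4096. The two summary tables are consistent with that swap: IOPS falls from ~39.2k to ~18.4k while throughput rises from ~153 MiB/s to ~1148 MiB/s, and 18362.34 IOPS x 64 KiB = 1147.65 MiB/s as printed. A hypothetical one-liner for pulling those totals out of a saved log — "bdevperf.log" is an assumed filename, and this assumes the leading timestamps have been stripped:)
# hypothetical helper, not part of the test suite: extract the Total row
# from a bdevperf summary table of the form "Total : <IOPS> <MiB/s> ..."
awk '$1 == "Total" { printf "IOPS=%s MiB/s=%s\n", $3, $4 }' bdevperf.log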
00:18:49.427 [2024-07-14 21:18:00.470275] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:49.686 EAL: TSC is not safe to use in SMP mode 00:18:49.686 EAL: TSC is not invariant 00:18:49.686 [2024-07-14 21:18:01.014834] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.686 [2024-07-14 21:18:01.118668] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:49.686 [2024-07-14 21:18:01.121638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:49.686 [2024-07-14 21:18:01.183319] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:49.944 Running I/O for 1 seconds... 00:18:50.875 00:18:50.875 Latency(us) 00:18:50.875 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.875 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:18:50.875 Nvme0n1 : 1.00 79499.71 310.55 0.00 0.00 1608.79 325.82 10962.39 00:18:50.875 =================================================================================================================== 00:18:50.875 Total : 79499.71 310.55 0.00 0.00 1608.79 325.82 10962.39 00:18:51.135 00:18:51.135 real 0m1.984s 00:18:51.135 user 0m1.398s 00:18:51.135 sys 0m0.582s 00:18:51.135 ************************************ 00:18:51.135 END TEST bdev_write_zeroes 00:18:51.135 ************************************ 00:18:51.135 21:18:02 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:51.135 21:18:02 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:18:51.135 21:18:02 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:51.135 21:18:02 blockdev_nvme -- bdev/blockdev.sh@782 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:51.135 21:18:02 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:18:51.135 21:18:02 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:51.135 21:18:02 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:51.135 ************************************ 00:18:51.135 START TEST bdev_json_nonenclosed 00:18:51.135 ************************************ 00:18:51.135 21:18:02 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:51.135 [2024-07-14 21:18:02.494864] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:51.135 [2024-07-14 21:18:02.495103] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:51.702 EAL: TSC is not safe to use in SMP mode 00:18:51.702 EAL: TSC is not invariant 00:18:51.702 [2024-07-14 21:18:03.065105] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.702 [2024-07-14 21:18:03.154611] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 
00:18:51.702 [2024-07-14 21:18:03.157010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:51.702 [2024-07-14 21:18:03.157081] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:18:51.702 [2024-07-14 21:18:03.157092] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:51.702 [2024-07-14 21:18:03.157100] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:51.959 00:18:51.959 real 0m0.775s 00:18:51.959 user 0m0.173s 00:18:51.959 sys 0m0.603s 00:18:51.959 ************************************ 00:18:51.959 END TEST bdev_json_nonenclosed 00:18:51.960 ************************************ 00:18:51.960 21:18:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1123 -- # es=234 00:18:51.960 21:18:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:51.960 21:18:03 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:18:51.960 21:18:03 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:18:51.960 21:18:03 blockdev_nvme -- bdev/blockdev.sh@782 -- # true 00:18:51.960 21:18:03 blockdev_nvme -- bdev/blockdev.sh@785 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:51.960 21:18:03 blockdev_nvme -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:18:51.960 21:18:03 blockdev_nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:51.960 21:18:03 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:51.960 ************************************ 00:18:51.960 START TEST bdev_json_nonarray 00:18:51.960 ************************************ 00:18:51.960 21:18:03 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:18:51.960 [2024-07-14 21:18:03.315982] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:18:51.960 [2024-07-14 21:18:03.316221] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:18:52.525 EAL: TSC is not safe to use in SMP mode 00:18:52.525 EAL: TSC is not invariant 00:18:52.525 [2024-07-14 21:18:03.844092] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.525 [2024-07-14 21:18:03.935970] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:18:52.525 [2024-07-14 21:18:03.938278] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:52.525 [2024-07-14 21:18:03.938352] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
00:18:52.525 [2024-07-14 21:18:03.938364] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:18:52.525 [2024-07-14 21:18:03.938374] app.c:1052:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:18:52.525 00:18:52.525 real 0m0.738s 00:18:52.525 user 0m0.183s 00:18:52.525 sys 0m0.554s 00:18:52.525 21:18:04 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1123 -- # es=234 00:18:52.525 21:18:04 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:52.525 ************************************ 00:18:52.525 END TEST bdev_json_nonarray 00:18:52.525 ************************************ 00:18:52.525 21:18:04 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:18:52.782 21:18:04 blockdev_nvme -- common/autotest_common.sh@1142 -- # return 234 00:18:52.782 21:18:04 blockdev_nvme -- bdev/blockdev.sh@785 -- # true 00:18:52.782 21:18:04 blockdev_nvme -- bdev/blockdev.sh@787 -- # [[ nvme == bdev ]] 00:18:52.782 21:18:04 blockdev_nvme -- bdev/blockdev.sh@794 -- # [[ nvme == gpt ]] 00:18:52.782 21:18:04 blockdev_nvme -- bdev/blockdev.sh@798 -- # [[ nvme == crypto_sw ]] 00:18:52.782 21:18:04 blockdev_nvme -- bdev/blockdev.sh@810 -- # trap - SIGINT SIGTERM EXIT 00:18:52.782 21:18:04 blockdev_nvme -- bdev/blockdev.sh@811 -- # cleanup 00:18:52.782 21:18:04 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:18:52.782 21:18:04 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:18:52.782 21:18:04 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:18:52.782 21:18:04 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:18:52.782 21:18:04 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:18:52.782 21:18:04 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:18:52.782 00:18:52.782 real 0m23.501s 00:18:52.782 user 0m34.455s 00:18:52.782 sys 0m5.195s 00:18:52.782 21:18:04 blockdev_nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:52.782 21:18:04 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:18:52.782 ************************************ 00:18:52.782 END TEST blockdev_nvme 00:18:52.782 ************************************ 00:18:52.782 21:18:04 -- common/autotest_common.sh@1142 -- # return 0 00:18:52.782 21:18:04 -- spdk/autotest.sh@213 -- # uname -s 00:18:52.782 21:18:04 -- spdk/autotest.sh@213 -- # [[ FreeBSD == Linux ]] 00:18:52.782 21:18:04 -- spdk/autotest.sh@216 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:18:52.782 21:18:04 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:52.782 21:18:04 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:52.782 21:18:04 -- common/autotest_common.sh@10 -- # set +x 00:18:52.782 ************************************ 00:18:52.782 START TEST nvme 00:18:52.782 ************************************ 00:18:52.782 21:18:04 nvme -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:18:52.782 * Looking for test storage... 
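(Editor's sketch of the expected-failure plumbing visible in the two negative JSON tests above, before the nvme suite gets under way: bdevperf is expected to abort on the malformed config, run_test records the exit status — es=234 here — and returns it, and the caller in blockdev.sh follows with "-- # true" so the failure does not kill the whole run under set -e. Sketched standalone below; the 234 is taken from this log rather than from any documented contract.)
# expected-failure pattern used by bdev_json_nonenclosed / bdev_json_nonarray:
# run the tool, capture its (expected) non-zero status, assert on it, move on
/home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json \
    -q 128 -o 4096 -w write_zeroes -t 1 && es=0 || es=$?
[ "$es" -eq 234 ]   # 234 is what this run produced; treated as the pass condition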
00:18:52.782 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:18:52.782 21:18:04 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:18:53.039 hw.nic_uio.bdfs="0:16:0" 00:18:53.039 21:18:04 nvme -- nvme/nvme.sh@79 -- # uname 00:18:53.039 21:18:04 nvme -- nvme/nvme.sh@79 -- # '[' FreeBSD = Linux ']' 00:18:53.039 21:18:04 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:18:53.039 21:18:04 nvme -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:18:53.039 21:18:04 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.039 21:18:04 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:53.039 ************************************ 00:18:53.039 START TEST nvme_reset 00:18:53.039 ************************************ 00:18:53.039 21:18:04 nvme.nvme_reset -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:18:53.605 EAL: TSC is not safe to use in SMP mode 00:18:53.605 EAL: TSC is not invariant 00:18:53.605 [2024-07-14 21:18:05.012671] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:53.605 Initializing NVMe Controllers 00:18:53.605 Skipping QEMU NVMe SSD at 0000:00:10.0 00:18:53.605 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:18:53.605 00:18:53.605 real 0m0.581s 00:18:53.605 user 0m0.006s 00:18:53.605 sys 0m0.575s 00:18:53.605 21:18:05 nvme.nvme_reset -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:53.605 ************************************ 00:18:53.605 END TEST nvme_reset 00:18:53.605 ************************************ 00:18:53.605 21:18:05 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:18:53.605 21:18:05 nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:53.605 21:18:05 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:18:53.605 21:18:05 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:53.605 21:18:05 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:53.605 21:18:05 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:53.605 ************************************ 00:18:53.605 START TEST nvme_identify 00:18:53.605 ************************************ 00:18:53.605 21:18:05 nvme.nvme_identify -- common/autotest_common.sh@1123 -- # nvme_identify 00:18:53.605 21:18:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:18:53.605 21:18:05 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:18:53.605 21:18:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:18:53.605 21:18:05 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:18:53.605 21:18:05 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # bdfs=() 00:18:53.605 21:18:05 nvme.nvme_identify -- common/autotest_common.sh@1513 -- # local bdfs 00:18:53.605 21:18:05 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:18:53.605 21:18:05 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:18:53.605 21:18:05 nvme.nvme_identify -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:18:53.605 21:18:05 nvme.nvme_identify -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:18:53.605 21:18:05 nvme.nvme_identify -- common/autotest_common.sh@1519 -- # printf '%s\n' 
0000:00:10.0 00:18:53.605 21:18:05 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:18:54.170 EAL: TSC is not safe to use in SMP mode 00:18:54.170 EAL: TSC is not invariant 00:18:54.170 [2024-07-14 21:18:05.687198] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:54.170 ===================================================== 00:18:54.170 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:54.170 ===================================================== 00:18:54.170 Controller Capabilities/Features 00:18:54.170 ================================ 00:18:54.170 Vendor ID: 1b36 00:18:54.170 Subsystem Vendor ID: 1af4 00:18:54.170 Serial Number: 12340 00:18:54.170 Model Number: QEMU NVMe Ctrl 00:18:54.170 Firmware Version: 8.0.0 00:18:54.170 Recommended Arb Burst: 6 00:18:54.170 IEEE OUI Identifier: 00 54 52 00:18:54.170 Multi-path I/O 00:18:54.170 May have multiple subsystem ports: No 00:18:54.170 May have multiple controllers: No 00:18:54.170 Associated with SR-IOV VF: No 00:18:54.170 Max Data Transfer Size: 524288 00:18:54.170 Max Number of Namespaces: 256 00:18:54.170 Max Number of I/O Queues: 64 00:18:54.170 NVMe Specification Version (VS): 1.4 00:18:54.170 NVMe Specification Version (Identify): 1.4 00:18:54.170 Maximum Queue Entries: 2048 00:18:54.170 Contiguous Queues Required: Yes 00:18:54.170 Arbitration Mechanisms Supported 00:18:54.170 Weighted Round Robin: Not Supported 00:18:54.170 Vendor Specific: Not Supported 00:18:54.170 Reset Timeout: 7500 ms 00:18:54.170 Doorbell Stride: 4 bytes 00:18:54.170 NVM Subsystem Reset: Not Supported 00:18:54.170 Command Sets Supported 00:18:54.170 NVM Command Set: Supported 00:18:54.170 Boot Partition: Not Supported 00:18:54.171 Memory Page Size Minimum: 4096 bytes 00:18:54.171 Memory Page Size Maximum: 65536 bytes 00:18:54.171 Persistent Memory Region: Not Supported 00:18:54.171 Optional Asynchronous Events Supported 00:18:54.171 Namespace Attribute Notices: Supported 00:18:54.171 Firmware Activation Notices: Not Supported 00:18:54.171 ANA Change Notices: Not Supported 00:18:54.171 PLE Aggregate Log Change Notices: Not Supported 00:18:54.171 LBA Status Info Alert Notices: Not Supported 00:18:54.171 EGE Aggregate Log Change Notices: Not Supported 00:18:54.171 Normal NVM Subsystem Shutdown event: Not Supported 00:18:54.171 Zone Descriptor Change Notices: Not Supported 00:18:54.171 Discovery Log Change Notices: Not Supported 00:18:54.171 Controller Attributes 00:18:54.171 128-bit Host Identifier: Not Supported 00:18:54.171 Non-Operational Permissive Mode: Not Supported 00:18:54.171 NVM Sets: Not Supported 00:18:54.171 Read Recovery Levels: Not Supported 00:18:54.171 Endurance Groups: Not Supported 00:18:54.171 Predictable Latency Mode: Not Supported 00:18:54.171 Traffic Based Keep ALive: Not Supported 00:18:54.171 Namespace Granularity: Not Supported 00:18:54.171 SQ Associations: Not Supported 00:18:54.171 UUID List: Not Supported 00:18:54.171 Multi-Domain Subsystem: Not Supported 00:18:54.171 Fixed Capacity Management: Not Supported 00:18:54.171 Variable Capacity Management: Not Supported 00:18:54.171 Delete Endurance Group: Not Supported 00:18:54.171 Delete NVM Set: Not Supported 00:18:54.171 Extended LBA Formats Supported: Supported 00:18:54.171 Flexible Data Placement Supported: Not Supported 00:18:54.171 00:18:54.171 Controller Memory Buffer Support 00:18:54.171 ================================ 00:18:54.171 Supported: No 00:18:54.171 00:18:54.171 
Persistent Memory Region Support 00:18:54.171 ================================ 00:18:54.171 Supported: No 00:18:54.171 00:18:54.171 Admin Command Set Attributes 00:18:54.171 ============================ 00:18:54.171 Security Send/Receive: Not Supported 00:18:54.171 Format NVM: Supported 00:18:54.171 Firmware Activate/Download: Not Supported 00:18:54.171 Namespace Management: Supported 00:18:54.171 Device Self-Test: Not Supported 00:18:54.171 Directives: Supported 00:18:54.171 NVMe-MI: Not Supported 00:18:54.171 Virtualization Management: Not Supported 00:18:54.171 Doorbell Buffer Config: Supported 00:18:54.171 Get LBA Status Capability: Not Supported 00:18:54.171 Command & Feature Lockdown Capability: Not Supported 00:18:54.171 Abort Command Limit: 4 00:18:54.171 Async Event Request Limit: 4 00:18:54.171 Number of Firmware Slots: N/A 00:18:54.171 Firmware Slot 1 Read-Only: N/A 00:18:54.171 Firmware Activation Without Reset: N/A 00:18:54.171 Multiple Update Detection Support: N/A 00:18:54.171 Firmware Update Granularity: No Information Provided 00:18:54.171 Per-Namespace SMART Log: Yes 00:18:54.171 Asymmetric Namespace Access Log Page: Not Supported 00:18:54.171 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:18:54.171 Command Effects Log Page: Supported 00:18:54.171 Get Log Page Extended Data: Supported 00:18:54.171 Telemetry Log Pages: Not Supported 00:18:54.171 Persistent Event Log Pages: Not Supported 00:18:54.171 Supported Log Pages Log Page: May Support 00:18:54.171 Commands Supported & Effects Log Page: Not Supported 00:18:54.171 Feature Identifiers & Effects Log Page:May Support 00:18:54.171 NVMe-MI Commands & Effects Log Page: May Support 00:18:54.171 Data Area 4 for Telemetry Log: Not Supported 00:18:54.171 Error Log Page Entries Supported: 1 00:18:54.171 Keep Alive: Not Supported 00:18:54.171 00:18:54.171 NVM Command Set Attributes 00:18:54.171 ========================== 00:18:54.171 Submission Queue Entry Size 00:18:54.171 Max: 64 00:18:54.171 Min: 64 00:18:54.171 Completion Queue Entry Size 00:18:54.171 Max: 16 00:18:54.171 Min: 16 00:18:54.171 Number of Namespaces: 256 00:18:54.171 Compare Command: Supported 00:18:54.171 Write Uncorrectable Command: Not Supported 00:18:54.171 Dataset Management Command: Supported 00:18:54.171 Write Zeroes Command: Supported 00:18:54.171 Set Features Save Field: Supported 00:18:54.171 Reservations: Not Supported 00:18:54.171 Timestamp: Supported 00:18:54.171 Copy: Supported 00:18:54.171 Volatile Write Cache: Present 00:18:54.171 Atomic Write Unit (Normal): 1 00:18:54.171 Atomic Write Unit (PFail): 1 00:18:54.171 Atomic Compare & Write Unit: 1 00:18:54.171 Fused Compare & Write: Not Supported 00:18:54.171 Scatter-Gather List 00:18:54.171 SGL Command Set: Supported 00:18:54.171 SGL Keyed: Not Supported 00:18:54.171 SGL Bit Bucket Descriptor: Not Supported 00:18:54.171 SGL Metadata Pointer: Not Supported 00:18:54.171 Oversized SGL: Not Supported 00:18:54.171 SGL Metadata Address: Not Supported 00:18:54.171 SGL Offset: Not Supported 00:18:54.171 Transport SGL Data Block: Not Supported 00:18:54.171 Replay Protected Memory Block: Not Supported 00:18:54.171 00:18:54.171 Firmware Slot Information 00:18:54.171 ========================= 00:18:54.171 Active slot: 1 00:18:54.171 Slot 1 Firmware Revision: 1.0 00:18:54.171 00:18:54.171 00:18:54.171 Commands Supported and Effects 00:18:54.171 ============================== 00:18:54.171 Admin Commands 00:18:54.171 -------------- 00:18:54.171 Delete I/O Submission Queue (00h): Supported 00:18:54.171 Create I/O 
Submission Queue (01h): Supported 00:18:54.171 Get Log Page (02h): Supported 00:18:54.171 Delete I/O Completion Queue (04h): Supported 00:18:54.171 Create I/O Completion Queue (05h): Supported 00:18:54.171 Identify (06h): Supported 00:18:54.171 Abort (08h): Supported 00:18:54.171 Set Features (09h): Supported 00:18:54.171 Get Features (0Ah): Supported 00:18:54.171 Asynchronous Event Request (0Ch): Supported 00:18:54.171 Namespace Attachment (15h): Supported NS-Inventory-Change 00:18:54.171 Directive Send (19h): Supported 00:18:54.171 Directive Receive (1Ah): Supported 00:18:54.171 Virtualization Management (1Ch): Supported 00:18:54.171 Doorbell Buffer Config (7Ch): Supported 00:18:54.171 Format NVM (80h): Supported LBA-Change 00:18:54.171 I/O Commands 00:18:54.171 ------------ 00:18:54.171 Flush (00h): Supported LBA-Change 00:18:54.171 Write (01h): Supported LBA-Change 00:18:54.171 Read (02h): Supported 00:18:54.171 Compare (05h): Supported 00:18:54.171 Write Zeroes (08h): Supported LBA-Change 00:18:54.171 Dataset Management (09h): Supported LBA-Change 00:18:54.171 Unknown (0Ch): Supported 00:18:54.171 Unknown (12h): Supported 00:18:54.171 Copy (19h): Supported LBA-Change 00:18:54.171 Unknown (1Dh): Supported LBA-Change 00:18:54.171 00:18:54.171 Error Log 00:18:54.171 ========= 00:18:54.171 00:18:54.171 Arbitration 00:18:54.171 =========== 00:18:54.171 Arbitration Burst: no limit 00:18:54.171 00:18:54.171 Power Management 00:18:54.171 ================ 00:18:54.171 Number of Power States: 1 00:18:54.171 Current Power State: Power State #0 00:18:54.171 Power State #0: 00:18:54.171 Max Power: 25.00 W 00:18:54.171 Non-Operational State: Operational 00:18:54.171 Entry Latency: 16 microseconds 00:18:54.171 Exit Latency: 4 microseconds 00:18:54.171 Relative Read Throughput: 0 00:18:54.171 Relative Read Latency: 0 00:18:54.171 Relative Write Throughput: 0 00:18:54.171 Relative Write Latency: 0 00:18:54.429 Idle Power: Not Reported 00:18:54.429 Active Power: Not Reported 00:18:54.429 Non-Operational Permissive Mode: Not Supported 00:18:54.429 00:18:54.429 Health Information 00:18:54.429 ================== 00:18:54.429 Critical Warnings: 00:18:54.429 Available Spare Space: OK 00:18:54.429 Temperature: OK 00:18:54.429 Device Reliability: OK 00:18:54.429 Read Only: No 00:18:54.429 Volatile Memory Backup: OK 00:18:54.429 Current Temperature: 323 Kelvin (50 Celsius) 00:18:54.429 Temperature Threshold: 343 Kelvin (70 Celsius) 00:18:54.429 Available Spare: 0% 00:18:54.429 Available Spare Threshold: 0% 00:18:54.429 Life Percentage Used: 0% 00:18:54.429 Data Units Read: 13357 00:18:54.429 Data Units Written: 13342 00:18:54.430 Host Read Commands: 288307 00:18:54.430 Host Write Commands: 288166 00:18:54.430 Controller Busy Time: 0 minutes 00:18:54.430 Power Cycles: 0 00:18:54.430 Power On Hours: 0 hours 00:18:54.430 Unsafe Shutdowns: 0 00:18:54.430 Unrecoverable Media Errors: 0 00:18:54.430 Lifetime Error Log Entries: 0 00:18:54.430 Warning Temperature Time: 0 minutes 00:18:54.430 Critical Temperature Time: 0 minutes 00:18:54.430 00:18:54.430 Number of Queues 00:18:54.430 ================ 00:18:54.430 Number of I/O Submission Queues: 64 00:18:54.430 Number of I/O Completion Queues: 64 00:18:54.430 00:18:54.430 ZNS Specific Controller Data 00:18:54.430 ============================ 00:18:54.430 Zone Append Size Limit: 0 00:18:54.430 00:18:54.430 00:18:54.430 Active Namespaces 00:18:54.430 ================= 00:18:54.430 Namespace ID:1 00:18:54.430 Error Recovery Timeout: Unlimited 00:18:54.430 Command Set 
Identifier: NVM (00h) 00:18:54.430 Deallocate: Supported 00:18:54.430 Deallocated/Unwritten Error: Supported 00:18:54.430 Deallocated Read Value: All 0x00 00:18:54.430 Deallocate in Write Zeroes: Not Supported 00:18:54.430 Deallocated Guard Field: 0xFFFF 00:18:54.430 Flush: Supported 00:18:54.430 Reservation: Not Supported 00:18:54.430 Namespace Sharing Capabilities: Private 00:18:54.430 Size (in LBAs): 1310720 (5GiB) 00:18:54.430 Capacity (in LBAs): 1310720 (5GiB) 00:18:54.430 Utilization (in LBAs): 1310720 (5GiB) 00:18:54.430 Thin Provisioning: Not Supported 00:18:54.430 Per-NS Atomic Units: No 00:18:54.430 Maximum Single Source Range Length: 128 00:18:54.430 Maximum Copy Length: 128 00:18:54.430 Maximum Source Range Count: 128 00:18:54.430 NGUID/EUI64 Never Reused: No 00:18:54.430 Namespace Write Protected: No 00:18:54.430 Number of LBA Formats: 8 00:18:54.430 Current LBA Format: LBA Format #04 00:18:54.430 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:54.430 LBA Format #01: Data Size: 512 Metadata Size: 8 00:18:54.430 LBA Format #02: Data Size: 512 Metadata Size: 16 00:18:54.430 LBA Format #03: Data Size: 512 Metadata Size: 64 00:18:54.430 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:18:54.430 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:18:54.430 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:18:54.430 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:18:54.430 00:18:54.430 NVM Specific Namespace Data 00:18:54.430 =========================== 00:18:54.430 Logical Block Storage Tag Mask: 0 00:18:54.430 Protection Information Capabilities: 00:18:54.430 16b Guard Protection Information Storage Tag Support: No 00:18:54.430 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:18:54.430 Storage Tag Check Read Support: No 00:18:54.430 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.430 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.430 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.430 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.430 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.430 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.430 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.430 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.430 21:18:05 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:18:54.430 21:18:05 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:18:54.997 EAL: TSC is not safe to use in SMP mode 00:18:54.997 EAL: TSC is not invariant 00:18:54.997 [2024-07-14 21:18:06.262074] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:54.997 ===================================================== 00:18:54.997 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:54.997 ===================================================== 00:18:54.997 Controller Capabilities/Features 00:18:54.997 ================================ 00:18:54.997 Vendor ID: 1b36 00:18:54.997 Subsystem Vendor ID: 1af4 00:18:54.997 Serial Number: 12340 00:18:54.997 Model Number: QEMU NVMe Ctrl 
00:18:54.997 Firmware Version: 8.0.0 00:18:54.997 Recommended Arb Burst: 6 00:18:54.997 IEEE OUI Identifier: 00 54 52 00:18:54.997 Multi-path I/O 00:18:54.997 May have multiple subsystem ports: No 00:18:54.997 May have multiple controllers: No 00:18:54.997 Associated with SR-IOV VF: No 00:18:54.997 Max Data Transfer Size: 524288 00:18:54.997 Max Number of Namespaces: 256 00:18:54.997 Max Number of I/O Queues: 64 00:18:54.997 NVMe Specification Version (VS): 1.4 00:18:54.997 NVMe Specification Version (Identify): 1.4 00:18:54.997 Maximum Queue Entries: 2048 00:18:54.997 Contiguous Queues Required: Yes 00:18:54.997 Arbitration Mechanisms Supported 00:18:54.997 Weighted Round Robin: Not Supported 00:18:54.997 Vendor Specific: Not Supported 00:18:54.997 Reset Timeout: 7500 ms 00:18:54.997 Doorbell Stride: 4 bytes 00:18:54.997 NVM Subsystem Reset: Not Supported 00:18:54.997 Command Sets Supported 00:18:54.997 NVM Command Set: Supported 00:18:54.997 Boot Partition: Not Supported 00:18:54.997 Memory Page Size Minimum: 4096 bytes 00:18:54.997 Memory Page Size Maximum: 65536 bytes 00:18:54.997 Persistent Memory Region: Not Supported 00:18:54.997 Optional Asynchronous Events Supported 00:18:54.997 Namespace Attribute Notices: Supported 00:18:54.997 Firmware Activation Notices: Not Supported 00:18:54.997 ANA Change Notices: Not Supported 00:18:54.997 PLE Aggregate Log Change Notices: Not Supported 00:18:54.997 LBA Status Info Alert Notices: Not Supported 00:18:54.997 EGE Aggregate Log Change Notices: Not Supported 00:18:54.997 Normal NVM Subsystem Shutdown event: Not Supported 00:18:54.997 Zone Descriptor Change Notices: Not Supported 00:18:54.997 Discovery Log Change Notices: Not Supported 00:18:54.997 Controller Attributes 00:18:54.997 128-bit Host Identifier: Not Supported 00:18:54.997 Non-Operational Permissive Mode: Not Supported 00:18:54.997 NVM Sets: Not Supported 00:18:54.997 Read Recovery Levels: Not Supported 00:18:54.997 Endurance Groups: Not Supported 00:18:54.997 Predictable Latency Mode: Not Supported 00:18:54.997 Traffic Based Keep ALive: Not Supported 00:18:54.997 Namespace Granularity: Not Supported 00:18:54.997 SQ Associations: Not Supported 00:18:54.997 UUID List: Not Supported 00:18:54.997 Multi-Domain Subsystem: Not Supported 00:18:54.997 Fixed Capacity Management: Not Supported 00:18:54.997 Variable Capacity Management: Not Supported 00:18:54.997 Delete Endurance Group: Not Supported 00:18:54.997 Delete NVM Set: Not Supported 00:18:54.997 Extended LBA Formats Supported: Supported 00:18:54.997 Flexible Data Placement Supported: Not Supported 00:18:54.997 00:18:54.998 Controller Memory Buffer Support 00:18:54.998 ================================ 00:18:54.998 Supported: No 00:18:54.998 00:18:54.998 Persistent Memory Region Support 00:18:54.998 ================================ 00:18:54.998 Supported: No 00:18:54.998 00:18:54.998 Admin Command Set Attributes 00:18:54.998 ============================ 00:18:54.998 Security Send/Receive: Not Supported 00:18:54.998 Format NVM: Supported 00:18:54.998 Firmware Activate/Download: Not Supported 00:18:54.998 Namespace Management: Supported 00:18:54.998 Device Self-Test: Not Supported 00:18:54.998 Directives: Supported 00:18:54.998 NVMe-MI: Not Supported 00:18:54.998 Virtualization Management: Not Supported 00:18:54.998 Doorbell Buffer Config: Supported 00:18:54.998 Get LBA Status Capability: Not Supported 00:18:54.998 Command & Feature Lockdown Capability: Not Supported 00:18:54.998 Abort Command Limit: 4 00:18:54.998 Async Event Request 
Limit: 4 00:18:54.998 Number of Firmware Slots: N/A 00:18:54.998 Firmware Slot 1 Read-Only: N/A 00:18:54.998 Firmware Activation Without Reset: N/A 00:18:54.998 Multiple Update Detection Support: N/A 00:18:54.998 Firmware Update Granularity: No Information Provided 00:18:54.998 Per-Namespace SMART Log: Yes 00:18:54.998 Asymmetric Namespace Access Log Page: Not Supported 00:18:54.998 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:18:54.998 Command Effects Log Page: Supported 00:18:54.998 Get Log Page Extended Data: Supported 00:18:54.998 Telemetry Log Pages: Not Supported 00:18:54.998 Persistent Event Log Pages: Not Supported 00:18:54.998 Supported Log Pages Log Page: May Support 00:18:54.998 Commands Supported & Effects Log Page: Not Supported 00:18:54.998 Feature Identifiers & Effects Log Page:May Support 00:18:54.998 NVMe-MI Commands & Effects Log Page: May Support 00:18:54.998 Data Area 4 for Telemetry Log: Not Supported 00:18:54.998 Error Log Page Entries Supported: 1 00:18:54.998 Keep Alive: Not Supported 00:18:54.998 00:18:54.998 NVM Command Set Attributes 00:18:54.998 ========================== 00:18:54.998 Submission Queue Entry Size 00:18:54.998 Max: 64 00:18:54.998 Min: 64 00:18:54.998 Completion Queue Entry Size 00:18:54.998 Max: 16 00:18:54.998 Min: 16 00:18:54.998 Number of Namespaces: 256 00:18:54.998 Compare Command: Supported 00:18:54.998 Write Uncorrectable Command: Not Supported 00:18:54.998 Dataset Management Command: Supported 00:18:54.998 Write Zeroes Command: Supported 00:18:54.998 Set Features Save Field: Supported 00:18:54.998 Reservations: Not Supported 00:18:54.998 Timestamp: Supported 00:18:54.998 Copy: Supported 00:18:54.998 Volatile Write Cache: Present 00:18:54.998 Atomic Write Unit (Normal): 1 00:18:54.998 Atomic Write Unit (PFail): 1 00:18:54.998 Atomic Compare & Write Unit: 1 00:18:54.998 Fused Compare & Write: Not Supported 00:18:54.998 Scatter-Gather List 00:18:54.998 SGL Command Set: Supported 00:18:54.998 SGL Keyed: Not Supported 00:18:54.998 SGL Bit Bucket Descriptor: Not Supported 00:18:54.998 SGL Metadata Pointer: Not Supported 00:18:54.998 Oversized SGL: Not Supported 00:18:54.998 SGL Metadata Address: Not Supported 00:18:54.998 SGL Offset: Not Supported 00:18:54.998 Transport SGL Data Block: Not Supported 00:18:54.998 Replay Protected Memory Block: Not Supported 00:18:54.998 00:18:54.998 Firmware Slot Information 00:18:54.998 ========================= 00:18:54.998 Active slot: 1 00:18:54.998 Slot 1 Firmware Revision: 1.0 00:18:54.998 00:18:54.998 00:18:54.998 Commands Supported and Effects 00:18:54.998 ============================== 00:18:54.998 Admin Commands 00:18:54.998 -------------- 00:18:54.998 Delete I/O Submission Queue (00h): Supported 00:18:54.998 Create I/O Submission Queue (01h): Supported 00:18:54.998 Get Log Page (02h): Supported 00:18:54.998 Delete I/O Completion Queue (04h): Supported 00:18:54.998 Create I/O Completion Queue (05h): Supported 00:18:54.998 Identify (06h): Supported 00:18:54.998 Abort (08h): Supported 00:18:54.998 Set Features (09h): Supported 00:18:54.998 Get Features (0Ah): Supported 00:18:54.998 Asynchronous Event Request (0Ch): Supported 00:18:54.998 Namespace Attachment (15h): Supported NS-Inventory-Change 00:18:54.998 Directive Send (19h): Supported 00:18:54.998 Directive Receive (1Ah): Supported 00:18:54.998 Virtualization Management (1Ch): Supported 00:18:54.998 Doorbell Buffer Config (7Ch): Supported 00:18:54.998 Format NVM (80h): Supported LBA-Change 00:18:54.998 I/O Commands 00:18:54.998 ------------ 
00:18:54.998 Flush (00h): Supported LBA-Change 00:18:54.998 Write (01h): Supported LBA-Change 00:18:54.998 Read (02h): Supported 00:18:54.998 Compare (05h): Supported 00:18:54.998 Write Zeroes (08h): Supported LBA-Change 00:18:54.998 Dataset Management (09h): Supported LBA-Change 00:18:54.998 Unknown (0Ch): Supported 00:18:54.998 Unknown (12h): Supported 00:18:54.998 Copy (19h): Supported LBA-Change 00:18:54.998 Unknown (1Dh): Supported LBA-Change 00:18:54.998 00:18:54.998 Error Log 00:18:54.998 ========= 00:18:54.998 00:18:54.998 Arbitration 00:18:54.998 =========== 00:18:54.998 Arbitration Burst: no limit 00:18:54.998 00:18:54.998 Power Management 00:18:54.998 ================ 00:18:54.998 Number of Power States: 1 00:18:54.998 Current Power State: Power State #0 00:18:54.998 Power State #0: 00:18:54.998 Max Power: 25.00 W 00:18:54.998 Non-Operational State: Operational 00:18:54.998 Entry Latency: 16 microseconds 00:18:54.998 Exit Latency: 4 microseconds 00:18:54.998 Relative Read Throughput: 0 00:18:54.998 Relative Read Latency: 0 00:18:54.998 Relative Write Throughput: 0 00:18:54.998 Relative Write Latency: 0 00:18:54.998 Idle Power: Not Reported 00:18:54.998 Active Power: Not Reported 00:18:54.998 Non-Operational Permissive Mode: Not Supported 00:18:54.998 00:18:54.998 Health Information 00:18:54.998 ================== 00:18:54.998 Critical Warnings: 00:18:54.998 Available Spare Space: OK 00:18:54.998 Temperature: OK 00:18:54.998 Device Reliability: OK 00:18:54.998 Read Only: No 00:18:54.998 Volatile Memory Backup: OK 00:18:54.998 Current Temperature: 323 Kelvin (50 Celsius) 00:18:54.998 Temperature Threshold: 343 Kelvin (70 Celsius) 00:18:54.998 Available Spare: 0% 00:18:54.998 Available Spare Threshold: 0% 00:18:54.998 Life Percentage Used: 0% 00:18:54.998 Data Units Read: 13357 00:18:54.998 Data Units Written: 13342 00:18:54.998 Host Read Commands: 288307 00:18:54.998 Host Write Commands: 288166 00:18:54.998 Controller Busy Time: 0 minutes 00:18:54.998 Power Cycles: 0 00:18:54.998 Power On Hours: 0 hours 00:18:54.998 Unsafe Shutdowns: 0 00:18:54.998 Unrecoverable Media Errors: 0 00:18:54.998 Lifetime Error Log Entries: 0 00:18:54.998 Warning Temperature Time: 0 minutes 00:18:54.998 Critical Temperature Time: 0 minutes 00:18:54.998 00:18:54.998 Number of Queues 00:18:54.998 ================ 00:18:54.998 Number of I/O Submission Queues: 64 00:18:54.998 Number of I/O Completion Queues: 64 00:18:54.998 00:18:54.998 ZNS Specific Controller Data 00:18:54.998 ============================ 00:18:54.998 Zone Append Size Limit: 0 00:18:54.998 00:18:54.998 00:18:54.998 Active Namespaces 00:18:54.998 ================= 00:18:54.998 Namespace ID:1 00:18:54.998 Error Recovery Timeout: Unlimited 00:18:54.998 Command Set Identifier: NVM (00h) 00:18:54.998 Deallocate: Supported 00:18:54.998 Deallocated/Unwritten Error: Supported 00:18:54.998 Deallocated Read Value: All 0x00 00:18:54.998 Deallocate in Write Zeroes: Not Supported 00:18:54.998 Deallocated Guard Field: 0xFFFF 00:18:54.998 Flush: Supported 00:18:54.998 Reservation: Not Supported 00:18:54.998 Namespace Sharing Capabilities: Private 00:18:54.998 Size (in LBAs): 1310720 (5GiB) 00:18:54.998 Capacity (in LBAs): 1310720 (5GiB) 00:18:54.998 Utilization (in LBAs): 1310720 (5GiB) 00:18:54.998 Thin Provisioning: Not Supported 00:18:54.998 Per-NS Atomic Units: No 00:18:54.998 Maximum Single Source Range Length: 128 00:18:54.998 Maximum Copy Length: 128 00:18:54.998 Maximum Source Range Count: 128 00:18:54.998 NGUID/EUI64 Never Reused: No 
00:18:54.998 Namespace Write Protected: No 00:18:54.998 Number of LBA Formats: 8 00:18:54.998 Current LBA Format: LBA Format #04 00:18:54.998 LBA Format #00: Data Size: 512 Metadata Size: 0 00:18:54.998 LBA Format #01: Data Size: 512 Metadata Size: 8 00:18:54.998 LBA Format #02: Data Size: 512 Metadata Size: 16 00:18:54.998 LBA Format #03: Data Size: 512 Metadata Size: 64 00:18:54.998 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:18:54.998 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:18:54.998 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:18:54.998 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:18:54.998 00:18:54.998 NVM Specific Namespace Data 00:18:54.998 =========================== 00:18:54.998 Logical Block Storage Tag Mask: 0 00:18:54.998 Protection Information Capabilities: 00:18:54.998 16b Guard Protection Information Storage Tag Support: No 00:18:54.998 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:18:54.999 Storage Tag Check Read Support: No 00:18:54.999 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.999 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.999 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.999 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.999 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.999 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.999 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.999 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:18:54.999 00:18:54.999 real 0m1.208s 00:18:54.999 user 0m0.040s 00:18:54.999 sys 0m1.180s 00:18:54.999 21:18:06 nvme.nvme_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:54.999 21:18:06 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:18:54.999 ************************************ 00:18:54.999 END TEST nvme_identify 00:18:54.999 ************************************ 00:18:54.999 21:18:06 nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:54.999 21:18:06 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:18:54.999 21:18:06 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:54.999 21:18:06 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:54.999 21:18:06 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:54.999 ************************************ 00:18:54.999 START TEST nvme_perf 00:18:54.999 ************************************ 00:18:54.999 21:18:06 nvme.nvme_perf -- common/autotest_common.sh@1123 -- # nvme_perf 00:18:54.999 21:18:06 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:18:55.595 EAL: TSC is not safe to use in SMP mode 00:18:55.595 EAL: TSC is not invariant 00:18:55.595 [2024-07-14 21:18:06.871638] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:56.531 Initializing NVMe Controllers 00:18:56.531 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:56.531 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:56.531 Initialization complete. Launching workers. 
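(One reference sketch while the read-latency pass above launches its workers: the two identify dumps earlier in this suite show the two ways spdk_nvme_identify is invoked here — once with no transport ID, where it probes whatever controllers gen_nvme.sh reported, and once pinned to a single PCIe function with -r. Both command lines appear verbatim in the log; repeated here only for convenience.)
# dump every locally attached controller
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0
# pin the dump to one controller by transport ID
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
    -r 'trtype:PCIe traddr:0000:00:10.0' -i 0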
00:18:56.531 ======================================================== 00:18:56.531 Latency(us) 00:18:56.531 Device Information : IOPS MiB/s Average min max 00:18:56.531 PCIE (0000:00:10.0) NSID 1 from core 0: 77940.12 913.36 1642.49 252.15 4698.46 00:18:56.531 ======================================================== 00:18:56.531 Total : 77940.12 913.36 1642.49 252.15 4698.46 00:18:56.531 00:18:56.531 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:18:56.531 ================================================================================= 00:18:56.531 1.00000% : 1362.851us 00:18:56.531 10.00000% : 1467.113us 00:18:56.531 25.00000% : 1526.691us 00:18:56.531 50.00000% : 1616.059us 00:18:56.531 75.00000% : 1727.768us 00:18:56.531 90.00000% : 1839.477us 00:18:56.531 95.00000% : 1921.397us 00:18:56.531 98.00000% : 2040.553us 00:18:56.531 99.00000% : 2502.284us 00:18:56.531 99.50000% : 2710.808us 00:18:56.531 99.90000% : 4498.154us 00:18:56.531 99.99000% : 4676.889us 00:18:56.531 99.99900% : 4706.678us 00:18:56.531 99.99990% : 4706.678us 00:18:56.531 99.99999% : 4706.678us 00:18:56.531 00:18:56.531 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:18:56.531 ============================================================================== 00:18:56.531 Range in us Cumulative IO count 00:18:56.531 251.346 - 253.207: 0.0013% ( 1) 00:18:56.531 525.033 - 528.757: 0.0026% ( 1) 00:18:56.531 528.757 - 532.480: 0.0064% ( 3) 00:18:56.531 532.480 - 536.204: 0.0103% ( 3) 00:18:56.531 536.204 - 539.927: 0.0141% ( 3) 00:18:56.531 539.927 - 543.651: 0.0179% ( 3) 00:18:56.531 543.651 - 547.375: 0.0218% ( 3) 00:18:56.531 547.375 - 551.098: 0.0256% ( 3) 00:18:56.531 551.098 - 554.822: 0.0308% ( 4) 00:18:56.531 554.822 - 558.546: 0.0333% ( 2) 00:18:56.531 558.546 - 562.269: 0.0346% ( 1) 00:18:56.531 1057.513 - 1064.960: 0.0359% ( 1) 00:18:56.531 1072.408 - 1079.855: 0.0397% ( 3) 00:18:56.531 1079.855 - 1087.302: 0.0462% ( 5) 00:18:56.531 1087.302 - 1094.749: 0.0526% ( 5) 00:18:56.531 1094.749 - 1102.197: 0.0590% ( 5) 00:18:56.531 1109.644 - 1117.091: 0.0641% ( 4) 00:18:56.531 1117.091 - 1124.539: 0.0705% ( 5) 00:18:56.531 1124.539 - 1131.986: 0.0731% ( 2) 00:18:56.531 1139.433 - 1146.880: 0.0769% ( 3) 00:18:56.531 1146.880 - 1154.328: 0.0846% ( 6) 00:18:56.531 1154.328 - 1161.775: 0.0936% ( 7) 00:18:56.531 1161.775 - 1169.222: 0.1026% ( 7) 00:18:56.531 1169.222 - 1176.669: 0.1090% ( 5) 00:18:56.531 1176.669 - 1184.117: 0.1128% ( 3) 00:18:56.531 1184.117 - 1191.564: 0.1218% ( 7) 00:18:56.531 1191.564 - 1199.011: 0.1321% ( 8) 00:18:56.531 1199.011 - 1206.459: 0.1436% ( 9) 00:18:56.531 1206.459 - 1213.906: 0.1590% ( 12) 00:18:56.531 1213.906 - 1221.353: 0.1718% ( 10) 00:18:56.531 1221.353 - 1228.800: 0.1885% ( 13) 00:18:56.531 1228.800 - 1236.248: 0.2026% ( 11) 00:18:56.531 1236.248 - 1243.695: 0.2154% ( 10) 00:18:56.531 1243.695 - 1251.142: 0.2244% ( 7) 00:18:56.531 1251.142 - 1258.589: 0.2385% ( 11) 00:18:56.531 1258.589 - 1266.037: 0.2513% ( 10) 00:18:56.531 1266.037 - 1273.484: 0.2705% ( 15) 00:18:56.531 1273.484 - 1280.931: 0.2936% ( 18) 00:18:56.531 1280.931 - 1288.379: 0.3116% ( 14) 00:18:56.531 1288.379 - 1295.826: 0.3295% ( 14) 00:18:56.531 1295.826 - 1303.273: 0.3436% ( 11) 00:18:56.531 1303.273 - 1310.720: 0.3731% ( 23) 00:18:56.531 1310.720 - 1318.168: 0.4128% ( 31) 00:18:56.531 1318.168 - 1325.615: 0.4654% ( 41) 00:18:56.531 1325.615 - 1333.062: 0.5513% ( 67) 00:18:56.531 1333.062 - 1340.510: 0.6770% ( 98) 00:18:56.531 1340.510 - 1347.957: 0.8065% ( 101) 00:18:56.531 
1347.957 - 1355.404: 0.9988% ( 150) 00:18:56.531 1355.404 - 1362.851: 1.2603% ( 204) 00:18:56.531 1362.851 - 1370.299: 1.5360% ( 215) 00:18:56.531 1370.299 - 1377.746: 1.8527% ( 247) 00:18:56.531 1377.746 - 1385.193: 2.2155% ( 283) 00:18:56.531 1385.193 - 1392.640: 2.6271% ( 321) 00:18:56.531 1392.640 - 1400.088: 3.1220% ( 386) 00:18:56.531 1400.088 - 1407.535: 3.6579% ( 418) 00:18:56.531 1407.535 - 1414.982: 4.3066% ( 506) 00:18:56.531 1414.982 - 1422.430: 5.0272% ( 562) 00:18:56.531 1422.430 - 1429.877: 5.7657% ( 576) 00:18:56.531 1429.877 - 1437.324: 6.5991% ( 650) 00:18:56.531 1437.324 - 1444.771: 7.5504% ( 742) 00:18:56.531 1444.771 - 1452.219: 8.6402% ( 850) 00:18:56.531 1452.219 - 1459.666: 9.8402% ( 936) 00:18:56.531 1459.666 - 1467.113: 11.1326% ( 1008) 00:18:56.531 1467.113 - 1474.560: 12.5494% ( 1105) 00:18:56.531 1474.560 - 1482.008: 14.1276% ( 1231) 00:18:56.531 1482.008 - 1489.455: 15.8431% ( 1338) 00:18:56.531 1489.455 - 1496.902: 17.6560% ( 1414) 00:18:56.531 1496.902 - 1504.350: 19.4523% ( 1401) 00:18:56.531 1504.350 - 1511.797: 21.2921% ( 1435) 00:18:56.531 1511.797 - 1519.244: 23.2640% ( 1538) 00:18:56.531 1519.244 - 1526.691: 25.2692% ( 1564) 00:18:56.531 1526.691 - 1534.139: 27.3322% ( 1609) 00:18:56.531 1534.139 - 1541.586: 29.4451% ( 1648) 00:18:56.531 1541.586 - 1549.033: 31.5696% ( 1657) 00:18:56.531 1549.033 - 1556.480: 33.5812% ( 1569) 00:18:56.531 1556.480 - 1563.928: 35.7236% ( 1671) 00:18:56.531 1563.928 - 1571.375: 37.8455% ( 1655) 00:18:56.531 1571.375 - 1578.822: 40.0572% ( 1725) 00:18:56.531 1578.822 - 1586.270: 42.2740% ( 1729) 00:18:56.531 1586.270 - 1593.717: 44.3612% ( 1628) 00:18:56.531 1593.717 - 1601.164: 46.4588% ( 1636) 00:18:56.531 1601.164 - 1608.611: 48.5845% ( 1658) 00:18:56.531 1608.611 - 1616.059: 50.6539% ( 1614) 00:18:56.531 1616.059 - 1623.506: 52.6681% ( 1571) 00:18:56.531 1623.506 - 1630.953: 54.7002% ( 1585) 00:18:56.531 1630.953 - 1638.401: 56.6067% ( 1487) 00:18:56.531 1638.401 - 1645.848: 58.6274% ( 1576) 00:18:56.531 1645.848 - 1653.295: 60.4505% ( 1422) 00:18:56.531 1653.295 - 1660.742: 62.3609% ( 1490) 00:18:56.531 1660.742 - 1668.190: 64.2071% ( 1440) 00:18:56.531 1668.190 - 1675.637: 65.8277% ( 1264) 00:18:56.531 1675.637 - 1683.084: 67.4983% ( 1303) 00:18:56.531 1683.084 - 1690.531: 69.0292% ( 1194) 00:18:56.531 1690.531 - 1697.979: 70.5293% ( 1170) 00:18:56.531 1697.979 - 1705.426: 71.9806% ( 1132) 00:18:56.531 1705.426 - 1712.873: 73.3871% ( 1097) 00:18:56.531 1712.873 - 1720.321: 74.7077% ( 1030) 00:18:56.531 1720.321 - 1727.768: 75.9975% ( 1006) 00:18:56.531 1727.768 - 1735.215: 77.1860% ( 927) 00:18:56.531 1735.215 - 1742.662: 78.3886% ( 938) 00:18:56.531 1742.662 - 1750.110: 79.5861% ( 934) 00:18:56.531 1750.110 - 1757.557: 80.7464% ( 905) 00:18:56.531 1757.557 - 1765.004: 81.8324% ( 847) 00:18:56.531 1765.004 - 1772.451: 82.9312% ( 857) 00:18:56.531 1772.451 - 1779.899: 83.9902% ( 826) 00:18:56.531 1779.899 - 1787.346: 84.9979% ( 786) 00:18:56.531 1787.346 - 1794.793: 85.9083% ( 710) 00:18:56.531 1794.793 - 1802.241: 86.8006% ( 696) 00:18:56.531 1802.241 - 1809.688: 87.6224% ( 641) 00:18:56.531 1809.688 - 1817.135: 88.4020% ( 608) 00:18:56.531 1817.135 - 1824.582: 89.1559% ( 588) 00:18:56.531 1824.582 - 1832.030: 89.8687% ( 556) 00:18:56.531 1832.030 - 1839.477: 90.5854% ( 559) 00:18:56.531 1839.477 - 1846.924: 91.2572% ( 524) 00:18:56.531 1846.924 - 1854.371: 91.8368% ( 452) 00:18:56.531 1854.371 - 1861.819: 92.3381% ( 391) 00:18:56.531 1861.819 - 1869.266: 92.8753% ( 419) 00:18:56.531 1869.266 - 1876.713: 93.3125% ( 
341) 00:18:56.531 1876.713 - 1884.161: 93.7497% ( 341) 00:18:56.531 1884.161 - 1891.608: 94.1728% ( 330) 00:18:56.531 1891.608 - 1899.055: 94.5766% ( 315) 00:18:56.531 1899.055 - 1906.502: 94.9510% ( 292) 00:18:56.531 1906.502 - 1921.397: 95.6152% ( 518) 00:18:56.531 1921.397 - 1936.292: 96.1575% ( 423) 00:18:56.532 1936.292 - 1951.186: 96.6088% ( 352) 00:18:56.532 1951.186 - 1966.081: 97.0024% ( 307) 00:18:56.532 1966.081 - 1980.975: 97.2729% ( 211) 00:18:56.532 1980.975 - 1995.870: 97.5255% ( 197) 00:18:56.532 1995.870 - 2010.764: 97.7601% ( 183) 00:18:56.532 2010.764 - 2025.659: 97.9486% ( 147) 00:18:56.532 2025.659 - 2040.553: 98.0948% ( 114) 00:18:56.532 2040.553 - 2055.448: 98.2409% ( 114) 00:18:56.532 2055.448 - 2070.342: 98.3499% ( 85) 00:18:56.532 2070.342 - 2085.237: 98.4256% ( 59) 00:18:56.532 2085.237 - 2100.132: 98.4871% ( 48) 00:18:56.532 2100.132 - 2115.026: 98.5512% ( 50) 00:18:56.532 2115.026 - 2129.921: 98.5833% ( 25) 00:18:56.532 2129.921 - 2144.815: 98.6281% ( 35) 00:18:56.532 2144.815 - 2159.710: 98.6512% ( 18) 00:18:56.532 2159.710 - 2174.604: 98.6692% ( 14) 00:18:56.532 2174.604 - 2189.499: 98.6935% ( 19) 00:18:56.532 2189.499 - 2204.393: 98.7179% ( 19) 00:18:56.532 2204.393 - 2219.288: 98.7410% ( 18) 00:18:56.532 2219.288 - 2234.183: 98.7679% ( 21) 00:18:56.532 2234.183 - 2249.077: 98.8012% ( 26) 00:18:56.532 2249.077 - 2263.972: 98.8230% ( 17) 00:18:56.532 2263.972 - 2278.866: 98.8461% ( 18) 00:18:56.532 2278.866 - 2293.761: 98.8679% ( 17) 00:18:56.532 2293.761 - 2308.655: 98.8846% ( 13) 00:18:56.532 2308.655 - 2323.550: 98.9051% ( 16) 00:18:56.532 2323.550 - 2338.444: 98.9269% ( 17) 00:18:56.532 2338.444 - 2353.339: 98.9358% ( 7) 00:18:56.532 2353.339 - 2368.233: 98.9384% ( 2) 00:18:56.532 2368.233 - 2383.128: 98.9461% ( 6) 00:18:56.532 2383.128 - 2398.023: 98.9525% ( 5) 00:18:56.532 2398.023 - 2412.917: 98.9589% ( 5) 00:18:56.532 2412.917 - 2427.812: 98.9602% ( 1) 00:18:56.532 2457.601 - 2472.495: 98.9730% ( 10) 00:18:56.532 2472.495 - 2487.390: 98.9999% ( 21) 00:18:56.532 2487.390 - 2502.284: 99.0371% ( 29) 00:18:56.532 2502.284 - 2517.179: 99.0756% ( 30) 00:18:56.532 2517.179 - 2532.074: 99.1038% ( 22) 00:18:56.532 2532.074 - 2546.968: 99.1384% ( 27) 00:18:56.532 2546.968 - 2561.863: 99.1730% ( 27) 00:18:56.532 2561.863 - 2576.757: 99.2077% ( 27) 00:18:56.532 2576.757 - 2591.652: 99.2307% ( 18) 00:18:56.532 2591.652 - 2606.546: 99.2615% ( 24) 00:18:56.532 2606.546 - 2621.441: 99.2782% ( 13) 00:18:56.532 2621.441 - 2636.335: 99.3089% ( 24) 00:18:56.532 2636.335 - 2651.230: 99.3384% ( 23) 00:18:56.532 2651.230 - 2666.124: 99.3679% ( 23) 00:18:56.532 2666.124 - 2681.019: 99.4077% ( 31) 00:18:56.532 2681.019 - 2695.914: 99.4551% ( 37) 00:18:56.532 2695.914 - 2710.808: 99.5025% ( 37) 00:18:56.532 2710.808 - 2725.703: 99.5269% ( 19) 00:18:56.532 2725.703 - 2740.597: 99.5500% ( 18) 00:18:56.532 2740.597 - 2755.492: 99.5705% ( 16) 00:18:56.532 2755.492 - 2770.386: 99.5936% ( 18) 00:18:56.532 2770.386 - 2785.281: 99.6115% ( 14) 00:18:56.532 2785.281 - 2800.175: 99.6372% ( 20) 00:18:56.532 2800.175 - 2815.070: 99.6641% ( 21) 00:18:56.532 2815.070 - 2829.965: 99.6974% ( 26) 00:18:56.532 2829.965 - 2844.859: 99.7141% ( 13) 00:18:56.532 2844.859 - 2859.754: 99.7333% ( 15) 00:18:56.532 2859.754 - 2874.648: 99.7449% ( 9) 00:18:56.532 2874.648 - 2889.543: 99.7615% ( 13) 00:18:56.532 2889.543 - 2904.437: 99.7641% ( 2) 00:18:56.532 2904.437 - 2919.332: 99.7756% ( 9) 00:18:56.532 2919.332 - 2934.226: 99.7820% ( 5) 00:18:56.532 2934.226 - 2949.121: 99.7949% ( 10) 00:18:56.532 
2949.121 - 2964.015: 99.7987% ( 3) 00:18:56.532 2964.015 - 2978.910: 99.8141% ( 12) 00:18:56.532 2978.910 - 2993.805: 99.8269% ( 10) 00:18:56.532 3038.488 - 3053.383: 99.8282% ( 1) 00:18:56.532 3053.383 - 3068.277: 99.8308% ( 2) 00:18:56.532 3068.277 - 3083.172: 99.8333% ( 2) 00:18:56.532 3410.852 - 3425.747: 99.8346% ( 1) 00:18:56.532 3559.797 - 3574.692: 99.8359% ( 1) 00:18:56.532 4200.263 - 4230.052: 99.8385% ( 2) 00:18:56.532 4230.052 - 4259.841: 99.8564% ( 14) 00:18:56.532 4259.841 - 4289.630: 99.8718% ( 12) 00:18:56.532 4289.630 - 4319.420: 99.8744% ( 2) 00:18:56.532 4408.787 - 4438.576: 99.8885% ( 11) 00:18:56.532 4438.576 - 4468.365: 99.9000% ( 9) 00:18:56.532 4468.365 - 4498.154: 99.9167% ( 13) 00:18:56.532 4498.154 - 4527.943: 99.9269% ( 8) 00:18:56.532 4527.943 - 4557.732: 99.9410% ( 11) 00:18:56.532 4557.732 - 4587.521: 99.9564% ( 12) 00:18:56.532 4587.521 - 4617.311: 99.9718% ( 12) 00:18:56.532 4617.311 - 4647.100: 99.9782% ( 5) 00:18:56.532 4647.100 - 4676.889: 99.9923% ( 11) 00:18:56.532 4676.889 - 4706.678: 100.0000% ( 6) 00:18:56.532 00:18:56.532 21:18:07 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0 00:18:57.098 EAL: TSC is not safe to use in SMP mode 00:18:57.098 EAL: TSC is not invariant 00:18:57.098 [2024-07-14 21:18:08.488081] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:58.030 Initializing NVMe Controllers 00:18:58.030 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:18:58.030 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:18:58.030 Initialization complete. Launching workers. 00:18:58.030 ======================================================== 00:18:58.030 Latency(us) 00:18:58.030 Device Information : IOPS MiB/s Average min max 00:18:58.030 PCIE (0000:00:10.0) NSID 1 from core 0: 72736.34 852.38 1760.84 574.46 11146.88 00:18:58.030 ======================================================== 00:18:58.030 Total : 72736.34 852.38 1760.84 574.46 11146.88 00:18:58.030 00:18:58.030 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:18:58.030 ================================================================================= 00:18:58.030 1.00000% : 1131.986us 00:18:58.030 10.00000% : 1489.455us 00:18:58.030 25.00000% : 1638.401us 00:18:58.030 50.00000% : 1742.662us 00:18:58.030 75.00000% : 1839.477us 00:18:58.030 90.00000% : 1980.975us 00:18:58.030 95.00000% : 2219.288us 00:18:58.030 98.00000% : 2502.284us 00:18:58.030 99.00000% : 2785.281us 00:18:58.030 99.50000% : 3023.594us 00:18:58.030 99.90000% : 8281.370us 00:18:58.030 99.99000% : 10426.185us 00:18:58.030 99.99900% : 11200.702us 00:18:58.030 99.99990% : 11200.702us 00:18:58.030 99.99999% : 11200.702us 00:18:58.030 00:18:58.030 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:18:58.030 ============================================================================== 00:18:58.030 Range in us Cumulative IO count 00:18:58.030 573.440 - 577.164: 0.0014% ( 1) 00:18:58.030 726.109 - 729.833: 0.0041% ( 2) 00:18:58.030 793.135 - 796.858: 0.0110% ( 5) 00:18:58.030 796.858 - 800.582: 0.0247% ( 10) 00:18:58.030 800.582 - 804.306: 0.0261% ( 1) 00:18:58.030 845.266 - 848.989: 0.0275% ( 1) 00:18:58.030 848.989 - 852.713: 0.0330% ( 4) 00:18:58.030 852.713 - 856.437: 0.0371% ( 3) 00:18:58.030 856.437 - 860.160: 0.0440% ( 5) 00:18:58.030 860.160 - 863.884: 0.0467% ( 2) 00:18:58.030 863.884 - 867.608: 0.0509% ( 3) 00:18:58.030 867.608 - 871.331: 0.0577% ( 5) 
00:18:58.030 871.331 - 875.055: 0.0605% ( 2) 00:18:58.030 875.055 - 878.778: 0.0632% ( 2) 00:18:58.030 878.778 - 882.502: 0.0674% ( 3) 00:18:58.030 904.844 - 908.568: 0.0687% ( 1) 00:18:58.030 908.568 - 912.291: 0.0701% ( 1) 00:18:58.030 912.291 - 916.015: 0.0742% ( 3) 00:18:58.030 916.015 - 919.738: 0.0866% ( 9) 00:18:58.030 938.357 - 942.080: 0.0880% ( 1) 00:18:58.030 945.804 - 949.528: 0.0894% ( 1) 00:18:58.030 949.528 - 953.251: 0.0907% ( 1) 00:18:58.030 968.146 - 975.593: 0.0935% ( 2) 00:18:58.030 975.593 - 983.040: 0.1141% ( 15) 00:18:58.030 983.040 - 990.488: 0.1320% ( 13) 00:18:58.030 990.488 - 997.935: 0.1663% ( 25) 00:18:58.030 997.935 - 1005.382: 0.2062% ( 29) 00:18:58.030 1005.382 - 1012.829: 0.2406% ( 25) 00:18:58.030 1012.829 - 1020.277: 0.2818% ( 30) 00:18:58.030 1020.277 - 1027.724: 0.3189% ( 27) 00:18:58.030 1027.724 - 1035.171: 0.3643% ( 33) 00:18:58.030 1035.171 - 1042.619: 0.4000% ( 26) 00:18:58.030 1042.619 - 1050.066: 0.4440% ( 32) 00:18:58.030 1050.066 - 1057.513: 0.4853% ( 30) 00:18:58.031 1057.513 - 1064.960: 0.5361% ( 37) 00:18:58.031 1064.960 - 1072.408: 0.5636% ( 20) 00:18:58.031 1072.408 - 1079.855: 0.5801% ( 12) 00:18:58.031 1079.855 - 1087.302: 0.6420% ( 45) 00:18:58.031 1087.302 - 1094.749: 0.6818% ( 29) 00:18:58.031 1094.749 - 1102.197: 0.7519% ( 51) 00:18:58.031 1102.197 - 1109.644: 0.7959% ( 32) 00:18:58.031 1109.644 - 1117.091: 0.8523% ( 41) 00:18:58.031 1117.091 - 1124.539: 0.9100% ( 42) 00:18:58.031 1124.539 - 1131.986: 1.0104% ( 73) 00:18:58.031 1131.986 - 1139.433: 1.1135% ( 75) 00:18:58.031 1139.433 - 1146.880: 1.1685% ( 40) 00:18:58.031 1146.880 - 1154.328: 1.2317% ( 46) 00:18:58.031 1154.328 - 1161.775: 1.3417% ( 80) 00:18:58.031 1161.775 - 1169.222: 1.4035% ( 45) 00:18:58.031 1169.222 - 1176.669: 1.4585% ( 40) 00:18:58.031 1176.669 - 1184.117: 1.6097% ( 110) 00:18:58.031 1184.117 - 1191.564: 1.6867% ( 56) 00:18:58.031 1191.564 - 1199.011: 1.7774% ( 66) 00:18:58.031 1199.011 - 1206.459: 1.9135% ( 99) 00:18:58.031 1206.459 - 1213.906: 2.0373% ( 90) 00:18:58.031 1213.906 - 1221.353: 2.1596% ( 89) 00:18:58.031 1221.353 - 1228.800: 2.2764% ( 85) 00:18:58.031 1228.800 - 1236.248: 2.3699% ( 68) 00:18:58.031 1236.248 - 1243.695: 2.4895% ( 87) 00:18:58.031 1243.695 - 1251.142: 2.6888% ( 145) 00:18:58.031 1251.142 - 1258.589: 2.8098% ( 88) 00:18:58.031 1258.589 - 1266.037: 2.9198% ( 80) 00:18:58.031 1266.037 - 1273.484: 3.0655% ( 106) 00:18:58.031 1273.484 - 1280.931: 3.1837% ( 86) 00:18:58.031 1280.931 - 1288.379: 3.3281% ( 105) 00:18:58.031 1288.379 - 1295.826: 3.4848% ( 114) 00:18:58.031 1295.826 - 1303.273: 3.6965% ( 154) 00:18:58.031 1303.273 - 1310.720: 3.8656% ( 123) 00:18:58.031 1310.720 - 1318.168: 3.9797% ( 83) 00:18:58.031 1318.168 - 1325.615: 4.2106% ( 168) 00:18:58.031 1325.615 - 1333.062: 4.4031% ( 140) 00:18:58.031 1333.062 - 1340.510: 4.6093% ( 150) 00:18:58.031 1340.510 - 1347.957: 4.7852% ( 128) 00:18:58.031 1347.957 - 1355.404: 5.0299% ( 178) 00:18:58.031 1355.404 - 1362.851: 5.2663% ( 172) 00:18:58.031 1362.851 - 1370.299: 5.4505% ( 134) 00:18:58.031 1370.299 - 1377.746: 5.6884% ( 173) 00:18:58.031 1377.746 - 1385.193: 5.8918% ( 148) 00:18:58.031 1385.193 - 1392.640: 6.0444% ( 111) 00:18:58.031 1392.640 - 1400.088: 6.2987% ( 185) 00:18:58.031 1400.088 - 1407.535: 6.5173% ( 159) 00:18:58.031 1407.535 - 1414.982: 6.7881% ( 197) 00:18:58.031 1414.982 - 1422.430: 6.9888% ( 146) 00:18:58.031 1422.430 - 1429.877: 7.1950% ( 150) 00:18:58.031 1429.877 - 1437.324: 7.4851% ( 211) 00:18:58.031 1437.324 - 1444.771: 7.8576% ( 271) 00:18:58.031 
1444.771 - 1452.219: 8.1353% ( 202) 00:18:58.031 1452.219 - 1459.666: 8.5587% ( 308) 00:18:58.031 1459.666 - 1467.113: 8.9230% ( 265) 00:18:58.031 1467.113 - 1474.560: 9.2213% ( 217) 00:18:58.031 1474.560 - 1482.008: 9.6309% ( 298) 00:18:58.031 1482.008 - 1489.455: 10.0337% ( 293) 00:18:58.031 1489.455 - 1496.902: 10.4585% ( 309) 00:18:58.031 1496.902 - 1504.350: 10.8612% ( 293) 00:18:58.031 1504.350 - 1511.797: 11.3025% ( 321) 00:18:58.031 1511.797 - 1519.244: 11.8235% ( 379) 00:18:58.031 1519.244 - 1526.691: 12.3142% ( 357) 00:18:58.031 1526.691 - 1534.139: 12.7981% ( 352) 00:18:58.031 1534.139 - 1541.586: 13.3329% ( 389) 00:18:58.031 1541.586 - 1549.033: 13.9116% ( 421) 00:18:58.031 1549.033 - 1556.480: 14.5934% ( 496) 00:18:58.031 1556.480 - 1563.928: 15.1708% ( 420) 00:18:58.031 1563.928 - 1571.375: 15.8210% ( 473) 00:18:58.031 1571.375 - 1578.822: 16.4712% ( 473) 00:18:58.031 1578.822 - 1586.270: 17.2630% ( 576) 00:18:58.031 1586.270 - 1593.717: 18.2322% ( 705) 00:18:58.031 1593.717 - 1601.164: 19.0913% ( 625) 00:18:58.031 1601.164 - 1608.611: 20.0522% ( 699) 00:18:58.031 1608.611 - 1616.059: 21.1588% ( 805) 00:18:58.031 1616.059 - 1623.506: 22.4263% ( 922) 00:18:58.031 1623.506 - 1630.953: 23.8642% ( 1046) 00:18:58.031 1630.953 - 1638.401: 25.3131% ( 1054) 00:18:58.031 1638.401 - 1645.848: 26.6163% ( 948) 00:18:58.031 1645.848 - 1653.295: 28.1614% ( 1124) 00:18:58.031 1653.295 - 1660.742: 29.9196% ( 1279) 00:18:58.031 1660.742 - 1668.190: 31.6819% ( 1282) 00:18:58.031 1668.190 - 1675.637: 33.2999% ( 1177) 00:18:58.031 1675.637 - 1683.084: 35.0457% ( 1270) 00:18:58.031 1683.084 - 1690.531: 36.8754% ( 1331) 00:18:58.031 1690.531 - 1697.979: 38.7174% ( 1340) 00:18:58.031 1697.979 - 1705.426: 40.5485% ( 1332) 00:18:58.031 1705.426 - 1712.873: 42.2545% ( 1241) 00:18:58.031 1712.873 - 1720.321: 44.3137% ( 1498) 00:18:58.031 1720.321 - 1727.768: 46.2850% ( 1434) 00:18:58.031 1727.768 - 1735.215: 48.3800% ( 1524) 00:18:58.031 1735.215 - 1742.662: 50.4062% ( 1474) 00:18:58.031 1742.662 - 1750.110: 52.7500% ( 1705) 00:18:58.031 1750.110 - 1757.557: 55.0430% ( 1668) 00:18:58.031 1757.557 - 1765.004: 57.2191% ( 1583) 00:18:58.031 1765.004 - 1772.451: 59.4694% ( 1637) 00:18:58.031 1772.451 - 1779.899: 61.4901% ( 1470) 00:18:58.031 1779.899 - 1787.346: 63.4862% ( 1452) 00:18:58.031 1787.346 - 1794.793: 65.4808% ( 1451) 00:18:58.031 1794.793 - 1802.241: 67.4452% ( 1429) 00:18:58.031 1802.241 - 1809.688: 69.5718% ( 1547) 00:18:58.031 1809.688 - 1817.135: 71.3011% ( 1258) 00:18:58.031 1817.135 - 1824.582: 72.8985% ( 1162) 00:18:58.031 1824.582 - 1832.030: 74.7213% ( 1326) 00:18:58.031 1832.030 - 1839.477: 76.4272% ( 1241) 00:18:58.031 1839.477 - 1846.924: 77.8844% ( 1060) 00:18:58.031 1846.924 - 1854.371: 79.3030% ( 1032) 00:18:58.031 1854.371 - 1861.819: 80.5994% ( 943) 00:18:58.031 1861.819 - 1869.266: 81.7101% ( 808) 00:18:58.031 1869.266 - 1876.713: 82.6256% ( 666) 00:18:58.031 1876.713 - 1884.161: 83.5191% ( 650) 00:18:58.031 1884.161 - 1891.608: 84.2711% ( 547) 00:18:58.031 1891.608 - 1899.055: 85.1179% ( 616) 00:18:58.031 1899.055 - 1906.502: 85.8822% ( 556) 00:18:58.031 1906.502 - 1921.397: 87.2816% ( 1018) 00:18:58.031 1921.397 - 1936.292: 88.3731% ( 794) 00:18:58.031 1936.292 - 1951.186: 89.1374% ( 556) 00:18:58.031 1951.186 - 1966.081: 89.8124% ( 491) 00:18:58.031 1966.081 - 1980.975: 90.3430% ( 386) 00:18:58.031 1980.975 - 1995.870: 90.8172% ( 345) 00:18:58.031 1995.870 - 2010.764: 91.1609% ( 250) 00:18:58.031 2010.764 - 2025.659: 91.5513% ( 284) 00:18:58.031 2025.659 - 2040.553: 
91.9610% ( 298) 00:18:58.031 2040.553 - 2055.448: 92.4132% ( 329) 00:18:58.031 2055.448 - 2070.342: 92.6634% ( 182) 00:18:58.031 2070.342 - 2085.237: 92.9686% ( 222) 00:18:58.031 2085.237 - 2100.132: 93.2215% ( 184) 00:18:58.031 2100.132 - 2115.026: 93.4140% ( 140) 00:18:58.031 2115.026 - 2129.921: 93.5789% ( 120) 00:18:58.031 2129.921 - 2144.815: 93.7274% ( 108) 00:18:58.031 2144.815 - 2159.710: 94.0065% ( 203) 00:18:58.031 2159.710 - 2174.604: 94.3749% ( 268) 00:18:58.031 2174.604 - 2189.499: 94.6663% ( 212) 00:18:58.031 2189.499 - 2204.393: 94.8464% ( 131) 00:18:58.031 2204.393 - 2219.288: 95.0347% ( 137) 00:18:58.031 2219.288 - 2234.183: 95.1639% ( 94) 00:18:58.031 2234.183 - 2249.077: 95.2904% ( 92) 00:18:58.031 2249.077 - 2263.972: 95.4031% ( 82) 00:18:58.031 2263.972 - 2278.866: 95.5585% ( 113) 00:18:58.031 2278.866 - 2293.761: 95.6767% ( 86) 00:18:58.031 2293.761 - 2308.655: 95.8416% ( 120) 00:18:58.031 2308.655 - 2323.550: 96.0850% ( 177) 00:18:58.031 2323.550 - 2338.444: 96.3022% ( 158) 00:18:58.031 2338.444 - 2353.339: 96.4644% ( 118) 00:18:58.031 2353.339 - 2368.233: 96.6362% ( 125) 00:18:58.031 2368.233 - 2383.128: 96.7750% ( 101) 00:18:58.031 2383.128 - 2398.023: 96.9537% ( 130) 00:18:58.031 2398.023 - 2412.917: 97.1434% ( 138) 00:18:58.031 2412.917 - 2427.812: 97.3785% ( 171) 00:18:58.031 2427.812 - 2442.706: 97.5242% ( 106) 00:18:58.031 2442.706 - 2457.601: 97.6081% ( 61) 00:18:58.031 2457.601 - 2472.495: 97.7552% ( 107) 00:18:58.031 2472.495 - 2487.390: 97.8803% ( 91) 00:18:58.031 2487.390 - 2502.284: 98.0026% ( 89) 00:18:58.031 2502.284 - 2517.179: 98.1043% ( 74) 00:18:58.031 2517.179 - 2532.074: 98.2102% ( 77) 00:18:58.031 2532.074 - 2546.968: 98.3298% ( 87) 00:18:58.031 2546.968 - 2561.863: 98.4191% ( 65) 00:18:58.031 2561.863 - 2576.757: 98.4769% ( 42) 00:18:58.031 2576.757 - 2591.652: 98.5264% ( 36) 00:18:58.031 2591.652 - 2606.546: 98.5868% ( 44) 00:18:58.031 2606.546 - 2621.441: 98.6226% ( 26) 00:18:58.031 2621.441 - 2636.335: 98.6528% ( 22) 00:18:58.031 2636.335 - 2651.230: 98.6734% ( 15) 00:18:58.031 2651.230 - 2666.124: 98.7243% ( 37) 00:18:58.031 2666.124 - 2681.019: 98.7683% ( 32) 00:18:58.031 2681.019 - 2695.914: 98.7903% ( 16) 00:18:58.031 2695.914 - 2710.808: 98.8412% ( 37) 00:18:58.031 2710.808 - 2725.703: 98.8687% ( 20) 00:18:58.031 2725.703 - 2740.597: 98.9168% ( 35) 00:18:58.031 2740.597 - 2755.492: 98.9608% ( 32) 00:18:58.031 2755.492 - 2770.386: 98.9965% ( 26) 00:18:58.031 2770.386 - 2785.281: 99.0529% ( 41) 00:18:58.031 2785.281 - 2800.175: 99.1051% ( 38) 00:18:58.031 2800.175 - 2815.070: 99.1367% ( 23) 00:18:58.031 2815.070 - 2829.965: 99.1615% ( 18) 00:18:58.031 2829.965 - 2844.859: 99.1986% ( 27) 00:18:58.031 2844.859 - 2859.754: 99.2288% ( 22) 00:18:58.031 2859.754 - 2874.648: 99.2508% ( 16) 00:18:58.031 2874.648 - 2889.543: 99.2811% ( 22) 00:18:58.031 2889.543 - 2904.437: 99.2824% ( 1) 00:18:58.031 2904.437 - 2919.332: 99.2879% ( 4) 00:18:58.031 2919.332 - 2934.226: 99.2989% ( 8) 00:18:58.031 2934.226 - 2949.121: 99.3099% ( 8) 00:18:58.031 2949.121 - 2964.015: 99.3415% ( 23) 00:18:58.031 2964.015 - 2978.910: 99.3828% ( 30) 00:18:58.031 2978.910 - 2993.805: 99.4378% ( 40) 00:18:58.031 2993.805 - 3008.699: 99.4708% ( 24) 00:18:58.031 3008.699 - 3023.594: 99.5010% ( 22) 00:18:58.031 3023.594 - 3038.488: 99.5409% ( 29) 00:18:58.031 3038.488 - 3053.383: 99.5601% ( 14) 00:18:58.031 3053.383 - 3068.277: 99.5862% ( 19) 00:18:58.031 3068.277 - 3083.172: 99.6055% ( 14) 00:18:58.031 3083.172 - 3098.066: 99.6178% ( 9) 00:18:58.031 3098.066 - 3112.961: 
99.6247% ( 5) 00:18:58.031 3112.961 - 3127.856: 99.6275% ( 2) 00:18:58.031 3127.856 - 3142.750: 99.6316% ( 3) 00:18:58.595 3142.750 - 3157.645: 99.6426% ( 8) 00:18:58.595 3157.645 - 3172.539: 99.6522% ( 7) 00:18:58.595 3172.539 - 3187.434: 99.6536% ( 1) 00:18:58.595 3187.434 - 3202.328: 99.6934% ( 29) 00:18:58.595 3202.328 - 3217.223: 99.7099% ( 12) 00:18:58.595 3217.223 - 3232.117: 99.7251% ( 11) 00:18:58.595 3232.117 - 3247.012: 99.7278% ( 2) 00:18:58.595 3261.906 - 3276.801: 99.7306% ( 2) 00:18:58.595 3291.696 - 3306.590: 99.7319% ( 1) 00:18:58.595 3306.590 - 3321.485: 99.7361% ( 3) 00:18:58.595 3321.485 - 3336.379: 99.7416% ( 4) 00:18:58.595 3336.379 - 3351.274: 99.7471% ( 4) 00:18:58.595 3351.274 - 3366.168: 99.7484% ( 1) 00:18:58.595 3366.168 - 3381.063: 99.7512% ( 2) 00:18:58.595 3381.063 - 3395.957: 99.7553% ( 3) 00:18:58.595 3395.957 - 3410.852: 99.7608% ( 4) 00:18:58.595 3410.852 - 3425.747: 99.7663% ( 4) 00:18:58.595 3425.747 - 3440.641: 99.7704% ( 3) 00:18:58.595 3440.641 - 3455.536: 99.7732% ( 2) 00:18:58.595 3455.536 - 3470.430: 99.7801% ( 5) 00:18:58.595 3470.430 - 3485.325: 99.7842% ( 3) 00:18:58.595 3485.325 - 3500.219: 99.7856% ( 1) 00:18:58.595 3500.219 - 3515.114: 99.7911% ( 4) 00:18:58.595 3515.114 - 3530.008: 99.8007% ( 7) 00:18:58.595 3530.008 - 3544.903: 99.8089% ( 6) 00:18:58.595 3544.903 - 3559.797: 99.8130% ( 3) 00:18:58.595 3559.797 - 3574.692: 99.8172% ( 3) 00:18:58.595 3574.692 - 3589.587: 99.8240% ( 5) 00:18:58.595 3589.587 - 3604.481: 99.8282% ( 3) 00:18:58.595 3604.481 - 3619.376: 99.8309% ( 2) 00:18:58.595 3783.216 - 3798.110: 99.8323% ( 1) 00:18:58.595 3813.005 - 3842.794: 99.8350% ( 2) 00:18:58.595 3842.794 - 3872.583: 99.8364% ( 1) 00:18:58.595 3961.950 - 3991.739: 99.8378% ( 1) 00:18:58.595 4051.318 - 4081.107: 99.8392% ( 1) 00:18:58.595 4200.263 - 4230.052: 99.8419% ( 2) 00:18:58.595 4944.991 - 4974.780: 99.8447% ( 2) 00:18:58.595 5064.147 - 5093.936: 99.8460% ( 1) 00:18:58.595 5093.936 - 5123.725: 99.8474% ( 1) 00:18:58.595 5153.514 - 5183.303: 99.8488% ( 1) 00:18:58.595 5183.303 - 5213.093: 99.8502% ( 1) 00:18:58.595 5302.460 - 5332.249: 99.8515% ( 1) 00:18:58.595 5332.249 - 5362.038: 99.8529% ( 1) 00:18:58.595 5570.562 - 5600.351: 99.8557% ( 2) 00:18:58.595 5659.929 - 5689.718: 99.8612% ( 4) 00:18:58.595 5898.242 - 5928.031: 99.8625% ( 1) 00:18:58.595 6017.398 - 6047.187: 99.8639% ( 1) 00:18:58.595 6196.133 - 6225.922: 99.8653% ( 1) 00:18:58.595 6523.813 - 6553.602: 99.8667% ( 1) 00:18:58.595 6642.969 - 6672.758: 99.8680% ( 1) 00:18:58.595 6672.758 - 6702.548: 99.8694% ( 1) 00:18:58.595 6732.337 - 6762.126: 99.8708% ( 1) 00:18:58.595 6762.126 - 6791.915: 99.8722% ( 1) 00:18:58.595 6881.282 - 6911.071: 99.8735% ( 1) 00:18:58.595 7149.384 - 7179.173: 99.8763% ( 2) 00:18:58.595 7179.173 - 7208.962: 99.8790% ( 2) 00:18:58.595 7417.486 - 7447.275: 99.8804% ( 1) 00:18:58.595 7477.064 - 7506.853: 99.8818% ( 1) 00:18:58.595 7536.642 - 7566.431: 99.8832% ( 1) 00:18:58.595 7685.588 - 7745.166: 99.8845% ( 1) 00:18:58.595 7745.166 - 7804.744: 99.8859% ( 1) 00:18:58.595 7804.744 - 7864.322: 99.8887% ( 2) 00:18:58.595 7983.479 - 8043.057: 99.8900% ( 1) 00:18:58.595 8043.057 - 8102.635: 99.8928% ( 2) 00:18:58.595 8102.635 - 8162.213: 99.8955% ( 2) 00:18:58.595 8221.792 - 8281.370: 99.9010% ( 4) 00:18:58.595 8400.526 - 8460.104: 99.9038% ( 2) 00:18:58.595 8638.839 - 8698.417: 99.9065% ( 2) 00:18:58.595 8757.995 - 8817.574: 99.9079% ( 1) 00:18:58.595 8877.152 - 8936.730: 99.9106% ( 2) 00:18:58.595 8996.308 - 9055.886: 99.9134% ( 2) 00:18:58.595 9055.886 - 9115.465: 
99.9148% ( 1) 00:18:58.595 9115.465 - 9175.043: 99.9161% ( 1) 00:18:58.595 9234.621 - 9294.199: 99.9189% ( 2) 00:18:58.595 9294.199 - 9353.777: 99.9203% ( 1) 00:18:58.595 9413.356 - 9472.934: 99.9285% ( 6) 00:18:58.595 9472.934 - 9532.512: 99.9299% ( 1) 00:18:58.595 9532.512 - 9592.090: 99.9354% ( 4) 00:18:58.595 9651.668 - 9711.247: 99.9368% ( 1) 00:18:58.595 9711.247 - 9770.825: 99.9546% ( 13) 00:18:58.595 9770.825 - 9830.403: 99.9656% ( 8) 00:18:58.595 9830.403 - 9889.981: 99.9780% ( 9) 00:18:58.595 10068.716 - 10128.294: 99.9808% ( 2) 00:18:58.595 10247.450 - 10307.029: 99.9821% ( 1) 00:18:58.595 10307.029 - 10366.607: 99.9863% ( 3) 00:18:58.595 10366.607 - 10426.185: 99.9931% ( 5) 00:18:58.595 10664.498 - 10724.076: 99.9945% ( 1) 00:18:58.595 10783.654 - 10843.232: 99.9959% ( 1) 00:18:58.595 11021.967 - 11081.545: 99.9973% ( 1) 00:18:58.595 11081.545 - 11141.123: 99.9986% ( 1) 00:18:58.595 11141.123 - 11200.702: 100.0000% ( 1) 00:18:58.595 00:18:58.595 21:18:10 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']' 00:18:58.595 00:18:58.595 real 0m3.678s 00:18:58.595 user 0m2.555s 00:18:58.595 sys 0m1.119s 00:18:58.595 21:18:10 nvme.nvme_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:58.595 21:18:10 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x 00:18:58.595 ************************************ 00:18:58.595 END TEST nvme_perf 00:18:58.595 ************************************ 00:18:58.595 21:18:10 nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:58.595 21:18:10 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:18:58.595 21:18:10 nvme -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:58.595 21:18:10 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:58.595 21:18:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:58.595 ************************************ 00:18:58.595 START TEST nvme_hello_world 00:18:58.595 ************************************ 00:18:58.595 21:18:10 nvme.nvme_hello_world -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0 00:18:59.161 EAL: TSC is not safe to use in SMP mode 00:18:59.161 EAL: TSC is not invariant 00:18:59.161 [2024-07-14 21:18:10.673844] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:59.161 Initializing NVMe Controllers 00:18:59.161 Attaching to 0000:00:10.0 00:18:59.161 Attached to 0000:00:10.0 00:18:59.161 Namespace ID: 1 size: 5GB 00:18:59.161 Initialization complete. 00:18:59.161 INFO: using host memory buffer for IO 00:18:59.161 Hello world! 
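The hello_world run above attaches to the controller at 0000:00:10.0, writes a buffer to namespace 1, reads it back, and prints "Hello world!" once the data matches; the timing block that follows is the harness accounting for that run. The example can also be rerun by hand outside the harness; a minimal sketch, assuming the same checkout at /home/vagrant/spdk_repo/spdk used throughout this job and a controller already claimed by the userspace driver:

    # rerun the example directly; -i 0 reuses shared-memory id 0,
    # the same instance id the other tests in this job pass
    cd /home/vagrant/spdk_repo/spdk
    sudo ./build/examples/hello_world -i 0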
00:18:59.419 00:18:59.419 real 0m0.642s 00:18:59.419 user 0m0.004s 00:18:59.419 sys 0m0.638s 00:18:59.419 ************************************ 00:18:59.419 END TEST nvme_hello_world 00:18:59.419 ************************************ 00:18:59.419 21:18:10 nvme.nvme_hello_world -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:59.419 21:18:10 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x 00:18:59.419 21:18:10 nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:59.419 21:18:10 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:18:59.419 21:18:10 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:59.419 21:18:10 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.419 21:18:10 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:59.419 ************************************ 00:18:59.419 START TEST nvme_sgl 00:18:59.419 ************************************ 00:18:59.419 21:18:10 nvme.nvme_sgl -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl 00:18:59.985 EAL: TSC is not safe to use in SMP mode 00:18:59.985 EAL: TSC is not invariant 00:18:59.985 [2024-07-14 21:18:11.354951] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:18:59.985 0000:00:10.0: build_io_request_0 Invalid IO length parameter 00:18:59.985 0000:00:10.0: build_io_request_1 Invalid IO length parameter 00:18:59.985 0000:00:10.0: build_io_request_3 Invalid IO length parameter 00:18:59.985 0000:00:10.0: build_io_request_8 Invalid IO length parameter 00:18:59.985 0000:00:10.0: build_io_request_9 Invalid IO length parameter 00:18:59.985 0000:00:10.0: build_io_request_11 Invalid IO length parameter 00:18:59.985 NVMe Readv/Writev Request test 00:18:59.985 Attaching to 0000:00:10.0 00:18:59.985 Attached to 0000:00:10.0 00:18:59.985 0000:00:10.0: build_io_request_2 test passed 00:18:59.985 0000:00:10.0: build_io_request_4 test passed 00:18:59.985 0000:00:10.0: build_io_request_5 test passed 00:18:59.985 0000:00:10.0: build_io_request_6 test passed 00:18:59.985 0000:00:10.0: build_io_request_7 test passed 00:18:59.985 0000:00:10.0: build_io_request_10 test passed 00:18:59.985 Cleaning up... 
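Every test in this job is launched through the harness's run_test helper, which prints the START TEST/END TEST banners and the real/user/sys timing blocks such as the one that follows for nvme_sgl. A minimal sketch of what such a wrapper does (a hypothetical simplification; the real helper in common/autotest_common.sh also toggles xtrace and normalizes exit codes):

    # banner, timed execution, banner - mirroring the
    # "START TEST <name>" ... "END TEST <name>" pattern in this log
    run_test() {
        local name=$1; shift
        echo "START TEST $name"
        time "$@"
        local rc=$?
        echo "END TEST $name"
        return "$rc"
    }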
00:18:59.985 00:18:59.985 real 0m0.631s 00:18:59.985 user 0m0.030s 00:18:59.985 sys 0m0.601s 00:18:59.985 21:18:11 nvme.nvme_sgl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:59.985 21:18:11 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x 00:18:59.985 ************************************ 00:18:59.985 END TEST nvme_sgl 00:18:59.985 ************************************ 00:18:59.985 21:18:11 nvme -- common/autotest_common.sh@1142 -- # return 0 00:18:59.985 21:18:11 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:18:59.985 21:18:11 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:18:59.985 21:18:11 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:59.985 21:18:11 nvme -- common/autotest_common.sh@10 -- # set +x 00:18:59.985 ************************************ 00:18:59.985 START TEST nvme_e2edp 00:18:59.985 ************************************ 00:18:59.985 21:18:11 nvme.nvme_e2edp -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp 00:19:00.553 EAL: TSC is not safe to use in SMP mode 00:19:00.553 EAL: TSC is not invariant 00:19:00.553 [2024-07-14 21:18:12.016804] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:00.553 NVMe Write/Read with End-to-End data protection test 00:19:00.553 Attaching to 0000:00:10.0 00:19:00.553 Attached to 0000:00:10.0 00:19:00.553 Cleaning up... 00:19:00.553 00:19:00.553 real 0m0.608s 00:19:00.553 user 0m0.015s 00:19:00.553 sys 0m0.592s 00:19:00.553 21:18:12 nvme.nvme_e2edp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:00.553 21:18:12 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x 00:19:00.553 ************************************ 00:19:00.553 END TEST nvme_e2edp 00:19:00.553 ************************************ 00:19:00.812 21:18:12 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:00.812 21:18:12 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:00.812 21:18:12 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:00.812 21:18:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:00.812 21:18:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:00.812 ************************************ 00:19:00.812 START TEST nvme_reserve 00:19:00.812 ************************************ 00:19:00.813 21:18:12 nvme.nvme_reserve -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve 00:19:01.380 EAL: TSC is not safe to use in SMP mode 00:19:01.380 EAL: TSC is not invariant 00:19:01.380 [2024-07-14 21:18:12.677356] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:01.380 ===================================================== 00:19:01.380 NVMe Controller at PCI bus 0, device 16, function 0 00:19:01.380 ===================================================== 00:19:01.380 Reservations: Not Supported 00:19:01.380 Reservation test passed 00:19:01.380 00:19:01.380 real 0m0.610s 00:19:01.380 user 0m0.021s 00:19:01.380 sys 0m0.589s 00:19:01.380 21:18:12 nvme.nvme_reserve -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:01.380 21:18:12 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x 00:19:01.380 ************************************ 00:19:01.380 END TEST nvme_reserve 00:19:01.380 ************************************ 00:19:01.380 21:18:12 nvme -- 
common/autotest_common.sh@1142 -- # return 0 00:19:01.380 21:18:12 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:01.380 21:18:12 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:01.380 21:18:12 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:01.380 21:18:12 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:01.380 ************************************ 00:19:01.380 START TEST nvme_err_injection 00:19:01.380 ************************************ 00:19:01.380 21:18:12 nvme.nvme_err_injection -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection 00:19:01.947 EAL: TSC is not safe to use in SMP mode 00:19:01.947 EAL: TSC is not invariant 00:19:01.947 [2024-07-14 21:18:13.324504] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:01.947 NVMe Error Injection test 00:19:01.947 Attaching to 0000:00:10.0 00:19:01.947 Attached to 0000:00:10.0 00:19:01.947 0000:00:10.0: get features failed as expected 00:19:01.947 0000:00:10.0: get features successfully as expected 00:19:01.947 0000:00:10.0: read failed as expected 00:19:01.947 0000:00:10.0: read successfully as expected 00:19:01.947 Cleaning up... 00:19:01.947 00:19:01.947 real 0m0.603s 00:19:01.947 user 0m0.032s 00:19:01.947 sys 0m0.570s 00:19:01.947 21:18:13 nvme.nvme_err_injection -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:01.947 21:18:13 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x 00:19:01.947 ************************************ 00:19:01.947 END TEST nvme_err_injection 00:19:01.947 ************************************ 00:19:01.947 21:18:13 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:01.947 21:18:13 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:01.947 21:18:13 nvme -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:19:01.947 21:18:13 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:01.947 21:18:13 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:01.947 ************************************ 00:19:01.947 START TEST nvme_overhead 00:19:01.947 ************************************ 00:19:01.947 21:18:13 nvme.nvme_overhead -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0 00:19:02.513 EAL: TSC is not safe to use in SMP mode 00:19:02.513 EAL: TSC is not invariant 00:19:02.513 [2024-07-14 21:18:13.974984] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:03.451 Initializing NVMe Controllers 00:19:03.451 Attaching to 0000:00:10.0 00:19:03.451 Attached to 0000:00:10.0 00:19:03.451 Initialization complete. Launching workers. 
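The overhead test launched just above measures per-IO software overhead on the submit and completion paths; its averages and nanosecond histograms follow. The flags in the run_test line read naturally under SPDK's usual conventions, but only -o, -t, and -i appear elsewhere in this log, so treat the -H reading as an assumption:

    # -o 4096 : 4 KiB I/O size      -t 1 : run for one second
    # -H      : print histograms    -i 0 : shared-memory id, as elsewhere
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0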
00:19:03.451 submit (in ns) avg, min, max = 10506.5, 7864.5, 127287.8 00:19:03.451 complete (in ns) avg, min, max = 7926.3, 5373.6, 103814.1 00:19:03.451 00:19:03.451 Submit histogram 00:19:03.451 ================ 00:19:03.451 Range in us Cumulative Count 00:19:03.451 7.855 - 7.913: 0.0323% ( 3) 00:19:03.451 7.913 - 7.971: 0.0645% ( 3) 00:19:03.451 7.971 - 8.029: 0.0861% ( 2) 00:19:03.451 8.029 - 8.087: 0.0968% ( 1) 00:19:03.451 8.087 - 8.145: 0.1291% ( 3) 00:19:03.451 8.145 - 8.204: 0.3442% ( 20) 00:19:03.451 8.204 - 8.262: 1.2156% ( 81) 00:19:03.451 8.262 - 8.320: 2.5065% ( 120) 00:19:03.451 8.320 - 8.378: 3.6037% ( 102) 00:19:03.451 8.378 - 8.436: 6.8847% ( 305) 00:19:03.451 8.436 - 8.495: 13.2100% ( 588) 00:19:03.451 8.495 - 8.553: 19.1050% ( 548) 00:19:03.451 8.553 - 8.611: 23.8059% ( 437) 00:19:03.451 8.611 - 8.669: 32.4441% ( 803) 00:19:03.451 8.669 - 8.727: 39.9096% ( 694) 00:19:03.451 8.727 - 8.785: 44.1588% ( 395) 00:19:03.451 8.785 - 8.844: 48.4187% ( 396) 00:19:03.451 8.844 - 8.902: 53.1089% ( 436) 00:19:03.451 8.902 - 8.960: 56.3791% ( 304) 00:19:03.451 8.960 - 9.018: 58.2509% ( 174) 00:19:03.451 9.018 - 9.076: 60.1979% ( 181) 00:19:03.451 9.076 - 9.135: 62.8120% ( 243) 00:19:03.451 9.135 - 9.193: 65.0925% ( 212) 00:19:03.451 9.193 - 9.251: 66.2543% ( 108) 00:19:03.451 9.251 - 9.309: 67.3515% ( 102) 00:19:03.451 9.309 - 9.367: 68.9651% ( 150) 00:19:03.451 9.367 - 9.425: 70.5572% ( 148) 00:19:03.451 9.425 - 9.484: 71.6007% ( 97) 00:19:03.451 9.484 - 9.542: 72.5904% ( 92) 00:19:03.451 9.542 - 9.600: 74.3868% ( 167) 00:19:03.451 9.600 - 9.658: 76.3231% ( 180) 00:19:03.451 9.658 - 9.716: 77.6355% ( 122) 00:19:03.451 9.716 - 9.775: 78.6360% ( 93) 00:19:03.451 9.775 - 9.833: 79.1201% ( 45) 00:19:03.451 9.833 - 9.891: 79.4858% ( 34) 00:19:03.451 9.891 - 9.949: 79.6687% ( 17) 00:19:03.451 9.949 - 10.007: 79.8623% ( 18) 00:19:03.451 10.007 - 10.065: 80.1635% ( 28) 00:19:03.451 10.065 - 10.124: 80.3034% ( 13) 00:19:03.451 10.124 - 10.182: 80.3787% ( 7) 00:19:03.451 10.182 - 10.240: 80.5077% ( 12) 00:19:03.451 10.240 - 10.298: 80.6046% ( 9) 00:19:03.451 10.298 - 10.356: 80.7121% ( 10) 00:19:03.451 10.356 - 10.415: 80.7659% ( 5) 00:19:03.451 10.415 - 10.473: 80.8090% ( 4) 00:19:03.451 10.473 - 10.531: 80.8735% ( 6) 00:19:03.451 10.531 - 10.589: 80.9596% ( 8) 00:19:03.451 10.589 - 10.647: 81.0026% ( 4) 00:19:03.451 10.647 - 10.705: 81.0241% ( 2) 00:19:03.451 10.705 - 10.764: 81.0886% ( 6) 00:19:03.451 10.764 - 10.822: 81.1317% ( 4) 00:19:03.451 10.822 - 10.880: 81.1639% ( 3) 00:19:03.451 10.880 - 10.938: 81.1747% ( 1) 00:19:03.451 10.938 - 10.996: 81.2177% ( 4) 00:19:03.451 10.996 - 11.055: 81.2285% ( 1) 00:19:03.451 11.055 - 11.113: 81.2715% ( 4) 00:19:03.451 11.113 - 11.171: 81.2930% ( 2) 00:19:03.451 11.171 - 11.229: 81.3038% ( 1) 00:19:03.451 11.229 - 11.287: 81.3253% ( 2) 00:19:03.451 11.287 - 11.345: 81.3683% ( 4) 00:19:03.451 11.345 - 11.404: 81.4114% ( 4) 00:19:03.451 11.404 - 11.462: 81.4436% ( 3) 00:19:03.451 11.462 - 11.520: 81.5082% ( 6) 00:19:03.451 11.520 - 11.578: 81.5835% ( 7) 00:19:03.451 11.578 - 11.636: 81.6695% ( 8) 00:19:03.451 11.636 - 11.695: 81.7448% ( 7) 00:19:03.451 11.695 - 11.753: 81.8739% ( 12) 00:19:03.451 11.753 - 11.811: 82.0460% ( 16) 00:19:03.451 11.811 - 11.869: 82.1644% ( 11) 00:19:03.451 11.869 - 11.927: 82.3580% ( 18) 00:19:03.451 11.927 - 11.985: 82.4978% ( 13) 00:19:03.451 11.985 - 12.044: 82.7022% ( 19) 00:19:03.451 12.044 - 12.102: 82.8959% ( 18) 00:19:03.451 12.102 - 12.160: 83.1971% ( 28) 00:19:03.451 12.160 - 12.218: 83.5413% ( 32) 00:19:03.451 
12.218 - 12.276: 83.7457% ( 19) 00:19:03.451 12.276 - 12.335: 84.0039% ( 24) 00:19:03.451 12.335 - 12.393: 84.3266% ( 30) 00:19:03.451 12.393 - 12.451: 84.7569% ( 40) 00:19:03.451 12.451 - 12.509: 85.0581% ( 28) 00:19:03.451 12.509 - 12.567: 85.4238% ( 34) 00:19:03.451 12.567 - 12.625: 85.8003% ( 35) 00:19:03.451 12.625 - 12.684: 86.1876% ( 36) 00:19:03.451 12.684 - 12.742: 86.5856% ( 37) 00:19:03.451 12.742 - 12.800: 86.9836% ( 37) 00:19:03.451 12.800 - 12.858: 87.3171% ( 31) 00:19:03.451 12.858 - 12.916: 87.7259% ( 38) 00:19:03.451 12.916 - 12.975: 87.9948% ( 25) 00:19:03.451 12.975 - 13.033: 88.2745% ( 26) 00:19:03.451 13.033 - 13.091: 88.5757% ( 28) 00:19:03.451 13.091 - 13.149: 88.8662% ( 27) 00:19:03.451 13.149 - 13.207: 89.1781% ( 29) 00:19:03.451 13.207 - 13.265: 89.3933% ( 20) 00:19:03.451 13.265 - 13.324: 89.5869% ( 18) 00:19:03.451 13.324 - 13.382: 89.8021% ( 20) 00:19:03.451 13.382 - 13.440: 89.9957% ( 18) 00:19:03.451 13.440 - 13.498: 90.1463% ( 14) 00:19:03.451 13.498 - 13.556: 90.3077% ( 15) 00:19:03.451 13.556 - 13.615: 90.4798% ( 16) 00:19:03.451 13.615 - 13.673: 90.5873% ( 10) 00:19:03.451 13.673 - 13.731: 90.7595% ( 16) 00:19:03.451 13.731 - 13.789: 90.8670% ( 10) 00:19:03.452 13.789 - 13.847: 90.9854% ( 11) 00:19:03.452 13.847 - 13.905: 91.0607% ( 7) 00:19:03.452 13.905 - 13.964: 91.1360% ( 7) 00:19:03.452 13.964 - 14.022: 91.2220% ( 8) 00:19:03.452 14.022 - 14.080: 91.3404% ( 11) 00:19:03.452 14.080 - 14.138: 91.3834% ( 4) 00:19:03.452 14.138 - 14.196: 91.4479% ( 6) 00:19:03.452 14.196 - 14.255: 91.6093% ( 15) 00:19:03.452 14.255 - 14.313: 91.6738% ( 6) 00:19:03.452 14.313 - 14.371: 91.7707% ( 9) 00:19:03.452 14.371 - 14.429: 91.8890% ( 11) 00:19:03.452 14.429 - 14.487: 91.9643% ( 7) 00:19:03.452 14.487 - 14.545: 92.0611% ( 9) 00:19:03.452 14.545 - 14.604: 92.2225% ( 15) 00:19:03.452 14.604 - 14.662: 92.2655% ( 4) 00:19:03.452 14.662 - 14.720: 92.3408% ( 7) 00:19:03.452 14.720 - 14.778: 92.3946% ( 5) 00:19:03.452 14.778 - 14.836: 92.4269% ( 3) 00:19:03.452 14.836 - 14.895: 92.5022% ( 7) 00:19:03.452 14.895 - 15.011: 92.5775% ( 7) 00:19:03.452 15.011 - 15.127: 92.7711% ( 18) 00:19:03.452 15.127 - 15.244: 92.8571% ( 8) 00:19:03.452 15.244 - 15.360: 92.9647% ( 10) 00:19:03.452 15.360 - 15.476: 93.0400% ( 7) 00:19:03.452 15.476 - 15.593: 93.1046% ( 6) 00:19:03.452 15.593 - 15.709: 93.1261% ( 2) 00:19:03.452 15.709 - 15.825: 93.1799% ( 5) 00:19:03.452 15.825 - 15.942: 93.1906% ( 1) 00:19:03.452 15.942 - 16.058: 93.2552% ( 6) 00:19:03.452 16.058 - 16.175: 93.2874% ( 3) 00:19:03.452 16.291 - 16.407: 93.3412% ( 5) 00:19:03.452 16.407 - 16.524: 93.3520% ( 1) 00:19:03.452 16.524 - 16.640: 93.3627% ( 1) 00:19:03.452 16.640 - 16.756: 93.3735% ( 1) 00:19:03.452 16.756 - 16.873: 93.3950% ( 2) 00:19:03.452 16.873 - 16.989: 93.4058% ( 1) 00:19:03.452 16.989 - 17.105: 93.4165% ( 1) 00:19:03.452 17.105 - 17.222: 93.4380% ( 2) 00:19:03.452 17.222 - 17.338: 93.4488% ( 1) 00:19:03.452 17.338 - 17.455: 93.4596% ( 1) 00:19:03.452 17.804 - 17.920: 93.4703% ( 1) 00:19:03.452 17.920 - 18.036: 93.4811% ( 1) 00:19:03.452 18.036 - 18.153: 93.4918% ( 1) 00:19:03.452 18.153 - 18.269: 93.5349% ( 4) 00:19:03.452 18.502 - 18.618: 93.5456% ( 1) 00:19:03.452 18.735 - 18.851: 93.5779% ( 3) 00:19:03.452 18.967 - 19.084: 93.5994% ( 2) 00:19:03.452 19.200 - 19.316: 93.6102% ( 1) 00:19:03.452 19.549 - 19.665: 93.6209% ( 1) 00:19:03.452 19.782 - 19.898: 93.6532% ( 3) 00:19:03.452 20.015 - 20.131: 93.6639% ( 1) 00:19:03.452 20.131 - 20.247: 93.6855% ( 2) 00:19:03.452 20.596 - 20.713: 93.7070% ( 2) 00:19:03.452 
20.829 - 20.945: 93.7177% ( 1) 00:19:03.452 21.062 - 21.178: 93.7392% ( 2) 00:19:03.452 21.178 - 21.295: 93.7608% ( 2) 00:19:03.452 21.527 - 21.644: 93.7715% ( 1) 00:19:03.452 22.109 - 22.225: 93.8038% ( 3) 00:19:03.452 22.225 - 22.342: 93.8253% ( 2) 00:19:03.452 22.342 - 22.458: 93.8576% ( 3) 00:19:03.452 22.458 - 22.575: 93.9544% ( 9) 00:19:03.452 22.575 - 22.691: 94.1480% ( 18) 00:19:03.452 22.691 - 22.807: 94.2771% ( 12) 00:19:03.452 22.807 - 22.924: 94.4815% ( 19) 00:19:03.452 22.924 - 23.040: 94.5891% ( 10) 00:19:03.452 23.040 - 23.156: 94.6859% ( 9) 00:19:03.452 23.156 - 23.273: 94.7935% ( 10) 00:19:03.452 23.273 - 23.389: 94.8795% ( 8) 00:19:03.452 23.389 - 23.505: 95.0194% ( 13) 00:19:03.452 23.505 - 23.622: 95.1915% ( 16) 00:19:03.452 23.622 - 23.738: 95.4389% ( 23) 00:19:03.452 23.738 - 23.855: 95.7293% ( 27) 00:19:03.452 23.855 - 23.971: 96.0521% ( 30) 00:19:03.452 23.971 - 24.087: 96.3533% ( 28) 00:19:03.452 24.087 - 24.204: 96.4393% ( 8) 00:19:03.452 24.204 - 24.320: 96.6222% ( 17) 00:19:03.452 24.320 - 24.436: 96.8266% ( 19) 00:19:03.452 24.436 - 24.553: 96.9987% ( 16) 00:19:03.452 24.553 - 24.669: 97.1386% ( 13) 00:19:03.452 24.669 - 24.785: 97.2139% ( 7) 00:19:03.452 24.785 - 24.902: 97.3322% ( 11) 00:19:03.452 24.902 - 25.018: 97.3752% ( 4) 00:19:03.452 25.018 - 25.135: 97.4613% ( 8) 00:19:03.452 25.135 - 25.251: 97.5366% ( 7) 00:19:03.452 25.251 - 25.367: 97.5904% ( 5) 00:19:03.452 25.367 - 25.484: 97.6119% ( 2) 00:19:03.452 25.484 - 25.600: 97.6764% ( 6) 00:19:03.452 25.600 - 25.716: 97.7517% ( 7) 00:19:03.452 25.716 - 25.833: 97.7732% ( 2) 00:19:03.452 25.833 - 25.949: 97.8808% ( 10) 00:19:03.452 25.949 - 26.065: 97.9669% ( 8) 00:19:03.452 26.065 - 26.182: 98.0637% ( 9) 00:19:03.452 26.182 - 26.298: 98.1067% ( 4) 00:19:03.452 26.298 - 26.415: 98.1497% ( 4) 00:19:03.452 26.415 - 26.531: 98.2143% ( 6) 00:19:03.452 26.531 - 26.647: 98.3326% ( 11) 00:19:03.452 26.647 - 26.764: 98.4079% ( 7) 00:19:03.452 26.764 - 26.880: 98.4725% ( 6) 00:19:03.452 26.880 - 26.996: 98.5370% ( 6) 00:19:03.452 26.996 - 27.113: 98.5800% ( 4) 00:19:03.452 27.113 - 27.229: 98.6661% ( 8) 00:19:03.452 27.229 - 27.345: 98.7522% ( 8) 00:19:03.452 27.345 - 27.462: 98.7844% ( 3) 00:19:03.452 27.462 - 27.578: 98.8705% ( 8) 00:19:03.452 27.578 - 27.695: 98.9028% ( 3) 00:19:03.452 27.695 - 27.811: 98.9673% ( 6) 00:19:03.452 27.811 - 27.927: 99.0103% ( 4) 00:19:03.452 27.927 - 28.044: 99.0426% ( 3) 00:19:03.452 28.044 - 28.160: 99.1394% ( 9) 00:19:03.452 28.160 - 28.276: 99.1824% ( 4) 00:19:03.452 28.276 - 28.393: 99.2147% ( 3) 00:19:03.452 28.393 - 28.509: 99.2577% ( 4) 00:19:03.452 28.509 - 28.625: 99.3008% ( 4) 00:19:03.452 28.625 - 28.742: 99.3438% ( 4) 00:19:03.452 28.742 - 28.858: 99.3976% ( 5) 00:19:03.452 28.858 - 28.975: 99.4299% ( 3) 00:19:03.452 29.091 - 29.207: 99.4406% ( 1) 00:19:03.452 29.207 - 29.324: 99.4836% ( 4) 00:19:03.452 29.324 - 29.440: 99.5267% ( 4) 00:19:03.452 29.440 - 29.556: 99.5482% ( 2) 00:19:03.452 29.556 - 29.673: 99.5697% ( 2) 00:19:03.452 29.673 - 29.789: 99.5805% ( 1) 00:19:03.452 29.789 - 30.022: 99.5912% ( 1) 00:19:03.452 30.022 - 30.255: 99.6235% ( 3) 00:19:03.452 30.255 - 30.487: 99.6558% ( 3) 00:19:03.452 30.487 - 30.720: 99.6665% ( 1) 00:19:03.452 30.953 - 31.185: 99.6773% ( 1) 00:19:03.452 31.185 - 31.418: 99.6880% ( 1) 00:19:03.452 31.651 - 31.884: 99.6988% ( 1) 00:19:03.452 32.116 - 32.349: 99.7203% ( 2) 00:19:03.452 32.349 - 32.582: 99.7311% ( 1) 00:19:03.452 32.582 - 32.815: 99.7418% ( 1) 00:19:03.452 36.538 - 36.771: 99.7526% ( 1) 00:19:03.452 38.633 - 38.865: 
99.7741% ( 2) 00:19:03.452 39.331 - 39.564: 99.7849% ( 1) 00:19:03.452 39.564 - 39.796: 99.8064% ( 2) 00:19:03.452 40.262 - 40.495: 99.8171% ( 1) 00:19:03.452 40.727 - 40.960: 99.8279% ( 1) 00:19:03.452 41.193 - 41.425: 99.8386% ( 1) 00:19:03.452 41.891 - 42.124: 99.8494% ( 1) 00:19:03.452 43.520 - 43.753: 99.8602% ( 1) 00:19:03.453 44.218 - 44.451: 99.8709% ( 1) 00:19:03.453 44.916 - 45.149: 99.8817% ( 1) 00:19:03.453 47.244 - 47.476: 99.9032% ( 2) 00:19:03.453 47.709 - 47.942: 99.9139% ( 1) 00:19:03.453 56.087 - 56.320: 99.9247% ( 1) 00:19:03.453 56.785 - 57.018: 99.9355% ( 1) 00:19:03.453 58.880 - 59.113: 99.9462% ( 1) 00:19:03.453 60.509 - 60.975: 99.9570% ( 1) 00:19:03.453 70.284 - 70.749: 99.9677% ( 1) 00:19:03.453 76.335 - 76.800: 99.9785% ( 1) 00:19:03.453 101.469 - 101.935: 99.9892% ( 1) 00:19:03.453 126.604 - 127.535: 100.0000% ( 1) 00:19:03.453 00:19:03.453 Complete histogram 00:19:03.453 ================== 00:19:03.453 Range in us Cumulative Count 00:19:03.453 5.353 - 5.382: 0.0215% ( 2) 00:19:03.453 5.382 - 5.411: 0.0538% ( 3) 00:19:03.453 5.411 - 5.440: 0.2474% ( 18) 00:19:03.453 5.440 - 5.469: 0.3873% ( 13) 00:19:03.453 5.469 - 5.498: 0.6562% ( 25) 00:19:03.453 5.498 - 5.527: 0.7853% ( 12) 00:19:03.453 5.527 - 5.556: 1.0327% ( 23) 00:19:03.453 5.556 - 5.585: 1.9040% ( 81) 00:19:03.453 5.585 - 5.615: 3.0443% ( 106) 00:19:03.453 5.615 - 5.644: 4.3244% ( 119) 00:19:03.453 5.644 - 5.673: 5.0990% ( 72) 00:19:03.453 5.673 - 5.702: 6.2823% ( 110) 00:19:03.453 5.702 - 5.731: 8.1325% ( 172) 00:19:03.453 5.731 - 5.760: 9.4234% ( 120) 00:19:03.453 5.760 - 5.789: 10.5529% ( 105) 00:19:03.453 5.789 - 5.818: 11.2199% ( 62) 00:19:03.453 5.818 - 5.847: 12.1343% ( 85) 00:19:03.453 5.847 - 5.876: 14.1997% ( 192) 00:19:03.453 5.876 - 5.905: 18.0077% ( 354) 00:19:03.453 5.905 - 5.935: 22.3429% ( 403) 00:19:03.453 5.935 - 5.964: 25.7853% ( 320) 00:19:03.453 5.964 - 5.993: 26.9363% ( 107) 00:19:03.453 5.993 - 6.022: 28.0551% ( 104) 00:19:03.453 6.022 - 6.051: 30.9488% ( 269) 00:19:03.453 6.051 - 6.080: 35.6175% ( 434) 00:19:03.453 6.080 - 6.109: 42.3515% ( 626) 00:19:03.453 6.109 - 6.138: 46.4931% ( 385) 00:19:03.453 6.138 - 6.167: 47.5043% ( 94) 00:19:03.453 6.167 - 6.196: 48.2573% ( 70) 00:19:03.453 6.196 - 6.225: 49.7526% ( 139) 00:19:03.453 6.225 - 6.255: 52.7324% ( 277) 00:19:03.453 6.255 - 6.284: 56.4006% ( 341) 00:19:03.453 6.284 - 6.313: 59.1330% ( 254) 00:19:03.453 6.313 - 6.342: 60.1011% ( 90) 00:19:03.453 6.342 - 6.371: 60.5529% ( 42) 00:19:03.453 6.371 - 6.400: 61.4028% ( 79) 00:19:03.453 6.400 - 6.429: 63.4359% ( 189) 00:19:03.453 6.429 - 6.458: 65.6842% ( 209) 00:19:03.453 6.458 - 6.487: 67.9217% ( 208) 00:19:03.453 6.487 - 6.516: 69.6321% ( 159) 00:19:03.453 6.516 - 6.545: 70.5572% ( 86) 00:19:03.453 6.545 - 6.575: 71.1704% ( 57) 00:19:03.453 6.575 - 6.604: 71.9342% ( 71) 00:19:03.453 6.604 - 6.633: 73.1067% ( 109) 00:19:03.453 6.633 - 6.662: 74.0641% ( 89) 00:19:03.453 6.662 - 6.691: 74.6988% ( 59) 00:19:03.453 6.691 - 6.720: 75.2474% ( 51) 00:19:03.453 6.720 - 6.749: 75.4948% ( 23) 00:19:03.453 6.749 - 6.778: 75.7207% ( 21) 00:19:03.453 6.778 - 6.807: 75.9574% ( 22) 00:19:03.453 6.807 - 6.836: 76.3231% ( 34) 00:19:03.453 6.836 - 6.865: 76.7104% ( 36) 00:19:03.453 6.865 - 6.895: 77.1299% ( 39) 00:19:03.453 6.895 - 6.924: 77.4096% ( 26) 00:19:03.453 6.924 - 6.953: 77.5495% ( 13) 00:19:03.453 6.953 - 6.982: 77.8077% ( 24) 00:19:03.453 6.982 - 7.011: 77.9583% ( 14) 00:19:03.453 7.011 - 7.040: 78.0336% ( 7) 00:19:03.453 7.040 - 7.069: 78.1304% ( 9) 00:19:03.453 7.069 - 7.098: 78.2487% 
( 11) 00:19:03.453 7.098 - 7.127: 78.3240% ( 7) 00:19:03.453 7.127 - 7.156: 78.3993% ( 7) 00:19:03.453 7.156 - 7.185: 78.4746% ( 7) 00:19:03.453 7.185 - 7.215: 78.5392% ( 6) 00:19:03.453 7.215 - 7.244: 78.6252% ( 8) 00:19:03.453 7.244 - 7.273: 78.6790% ( 5) 00:19:03.453 7.273 - 7.302: 78.7005% ( 2) 00:19:03.453 7.302 - 7.331: 78.7328% ( 3) 00:19:03.453 7.331 - 7.360: 78.7866% ( 5) 00:19:03.453 7.360 - 7.389: 78.7973% ( 1) 00:19:03.453 7.389 - 7.418: 78.8188% ( 2) 00:19:03.453 7.418 - 7.447: 78.8619% ( 4) 00:19:03.453 7.447 - 7.505: 78.9264% ( 6) 00:19:03.453 7.505 - 7.564: 79.0017% ( 7) 00:19:03.453 7.564 - 7.622: 79.0232% ( 2) 00:19:03.453 7.622 - 7.680: 79.0555% ( 3) 00:19:03.453 7.680 - 7.738: 79.1416% ( 8) 00:19:03.453 7.738 - 7.796: 79.1846% ( 4) 00:19:03.453 7.796 - 7.855: 79.1954% ( 1) 00:19:03.453 7.855 - 7.913: 79.2814% ( 8) 00:19:03.453 7.913 - 7.971: 79.3890% ( 10) 00:19:03.453 7.971 - 8.029: 79.4535% ( 6) 00:19:03.453 8.029 - 8.087: 79.5611% ( 10) 00:19:03.453 8.087 - 8.145: 79.5934% ( 3) 00:19:03.453 8.145 - 8.204: 79.6902% ( 9) 00:19:03.453 8.204 - 8.262: 79.7870% ( 9) 00:19:03.453 8.262 - 8.320: 79.8408% ( 5) 00:19:03.453 8.320 - 8.378: 79.8946% ( 5) 00:19:03.453 8.378 - 8.436: 80.0022% ( 10) 00:19:03.453 8.436 - 8.495: 80.0559% ( 5) 00:19:03.453 8.495 - 8.553: 80.1205% ( 6) 00:19:03.453 8.553 - 8.611: 80.2496% ( 12) 00:19:03.453 8.611 - 8.669: 80.3464% ( 9) 00:19:03.453 8.669 - 8.727: 80.4540% ( 10) 00:19:03.453 8.727 - 8.785: 80.7014% ( 23) 00:19:03.453 8.785 - 8.844: 80.8520% ( 14) 00:19:03.453 8.844 - 8.902: 80.9811% ( 12) 00:19:03.453 8.902 - 8.960: 81.1209% ( 13) 00:19:03.453 8.960 - 9.018: 81.2500% ( 12) 00:19:03.453 9.018 - 9.076: 81.3468% ( 9) 00:19:03.453 9.076 - 9.135: 81.5189% ( 16) 00:19:03.453 9.135 - 9.193: 81.6480% ( 12) 00:19:03.453 9.193 - 9.251: 81.8094% ( 15) 00:19:03.453 9.251 - 9.309: 81.9600% ( 14) 00:19:03.453 9.309 - 9.367: 82.1321% ( 16) 00:19:03.453 9.367 - 9.425: 82.3365% ( 19) 00:19:03.453 9.425 - 9.484: 82.5947% ( 24) 00:19:03.453 9.484 - 9.542: 82.8959% ( 28) 00:19:03.453 9.542 - 9.600: 83.1540% ( 24) 00:19:03.453 9.600 - 9.658: 83.3584% ( 19) 00:19:03.453 9.658 - 9.716: 83.7027% ( 32) 00:19:03.453 9.716 - 9.775: 83.9716% ( 25) 00:19:03.453 9.775 - 9.833: 84.3051% ( 31) 00:19:03.453 9.833 - 9.891: 84.6063% ( 28) 00:19:03.453 9.891 - 9.949: 84.7999% ( 18) 00:19:03.453 9.949 - 10.007: 85.0258% ( 21) 00:19:03.453 10.007 - 10.065: 85.3378% ( 29) 00:19:03.453 10.065 - 10.124: 85.5637% ( 21) 00:19:03.453 10.124 - 10.182: 85.7788% ( 20) 00:19:03.453 10.182 - 10.240: 86.0908% ( 29) 00:19:03.453 10.240 - 10.298: 86.3275% ( 22) 00:19:03.453 10.298 - 10.356: 86.6394% ( 29) 00:19:03.453 10.356 - 10.415: 86.8223% ( 17) 00:19:03.453 10.415 - 10.473: 87.1665% ( 32) 00:19:03.453 10.473 - 10.531: 87.4139% ( 23) 00:19:03.453 10.531 - 10.589: 87.5645% ( 14) 00:19:03.453 10.589 - 10.647: 87.8442% ( 26) 00:19:03.453 10.647 - 10.705: 88.1024% ( 24) 00:19:03.453 10.705 - 10.764: 88.3176% ( 20) 00:19:03.453 10.764 - 10.822: 88.5865% ( 25) 00:19:03.453 10.822 - 10.880: 88.8339% ( 23) 00:19:03.453 10.880 - 10.938: 89.0168% ( 17) 00:19:03.453 10.938 - 10.996: 89.1889% ( 16) 00:19:03.453 10.996 - 11.055: 89.4148% ( 21) 00:19:03.453 11.055 - 11.113: 89.6730% ( 24) 00:19:03.453 11.113 - 11.171: 89.8128% ( 13) 00:19:03.453 11.171 - 11.229: 89.9634% ( 14) 00:19:03.453 11.229 - 11.287: 90.2646% ( 28) 00:19:03.453 11.287 - 11.345: 90.4045% ( 13) 00:19:03.453 11.345 - 11.404: 90.5120% ( 10) 00:19:03.453 11.404 - 11.462: 90.5981% ( 8) 00:19:03.453 11.462 - 11.520: 90.6734% ( 7) 
00:19:03.453 11.520 - 11.578: 90.7702% ( 9) 00:19:03.453 11.578 - 11.636: 90.8778% ( 10) 00:19:03.453 11.636 - 11.695: 90.9854% ( 10) 00:19:03.453 11.695 - 11.753: 91.0929% ( 10) 00:19:03.453 11.753 - 11.811: 91.1467% ( 5) 00:19:03.453 11.811 - 11.869: 91.2651% ( 11) 00:19:03.453 11.869 - 11.927: 91.3404% ( 7) 00:19:03.453 11.927 - 11.985: 91.3834% ( 4) 00:19:03.453 11.985 - 12.044: 91.4049% ( 2) 00:19:03.453 12.044 - 12.102: 91.4694% ( 6) 00:19:03.453 12.102 - 12.160: 91.5340% ( 6) 00:19:03.453 12.160 - 12.218: 91.5770% ( 4) 00:19:03.453 12.218 - 12.276: 91.6093% ( 3) 00:19:03.453 12.276 - 12.335: 91.6416% ( 3) 00:19:03.453 12.335 - 12.393: 91.6523% ( 1) 00:19:03.453 12.393 - 12.451: 91.7276% ( 7) 00:19:03.453 12.451 - 12.509: 91.7491% ( 2) 00:19:03.453 12.509 - 12.567: 91.7814% ( 3) 00:19:03.453 12.567 - 12.625: 91.7922% ( 1) 00:19:03.453 12.625 - 12.684: 91.8352% ( 4) 00:19:03.453 12.684 - 12.742: 91.8460% ( 1) 00:19:03.453 12.742 - 12.800: 91.8675% ( 2) 00:19:03.453 12.800 - 12.858: 91.8997% ( 3) 00:19:03.453 12.858 - 12.916: 91.9320% ( 3) 00:19:03.453 12.916 - 12.975: 91.9858% ( 5) 00:19:03.453 12.975 - 13.033: 92.0181% ( 3) 00:19:03.453 13.033 - 13.091: 92.0396% ( 2) 00:19:03.453 13.091 - 13.149: 92.0719% ( 3) 00:19:03.453 13.149 - 13.207: 92.0826% ( 1) 00:19:03.453 13.207 - 13.265: 92.0934% ( 1) 00:19:03.453 13.324 - 13.382: 92.1149% ( 2) 00:19:03.453 13.382 - 13.440: 92.1256% ( 1) 00:19:03.453 13.440 - 13.498: 92.1364% ( 1) 00:19:03.454 13.498 - 13.556: 92.1472% ( 1) 00:19:03.454 13.673 - 13.731: 92.1902% ( 4) 00:19:03.454 13.731 - 13.789: 92.2009% ( 1) 00:19:03.454 13.789 - 13.847: 92.2117% ( 1) 00:19:03.454 13.847 - 13.905: 92.2225% ( 1) 00:19:03.454 14.022 - 14.080: 92.2440% ( 2) 00:19:03.454 14.080 - 14.138: 92.2547% ( 1) 00:19:03.454 14.138 - 14.196: 92.2655% ( 1) 00:19:03.454 14.196 - 14.255: 92.2762% ( 1) 00:19:03.454 14.255 - 14.313: 92.2978% ( 2) 00:19:03.454 14.371 - 14.429: 92.3085% ( 1) 00:19:03.454 14.429 - 14.487: 92.3193% ( 1) 00:19:03.454 14.487 - 14.545: 92.3300% ( 1) 00:19:03.454 14.545 - 14.604: 92.3515% ( 2) 00:19:03.454 14.778 - 14.836: 92.3623% ( 1) 00:19:03.454 15.011 - 15.127: 92.3731% ( 1) 00:19:03.454 15.476 - 15.593: 92.4053% ( 3) 00:19:03.454 15.593 - 15.709: 92.4269% ( 2) 00:19:03.454 15.709 - 15.825: 92.4376% ( 1) 00:19:03.454 16.175 - 16.291: 92.4484% ( 1) 00:19:03.454 16.640 - 16.756: 92.4699% ( 2) 00:19:03.454 16.756 - 16.873: 92.4914% ( 2) 00:19:03.454 16.873 - 16.989: 92.5129% ( 2) 00:19:03.454 16.989 - 17.105: 92.5344% ( 2) 00:19:03.454 17.222 - 17.338: 92.5452% ( 1) 00:19:03.454 17.455 - 17.571: 92.5559% ( 1) 00:19:03.454 17.804 - 17.920: 92.5882% ( 3) 00:19:03.454 18.269 - 18.385: 92.5990% ( 1) 00:19:03.454 18.385 - 18.502: 92.6097% ( 1) 00:19:03.454 18.618 - 18.735: 92.6205% ( 1) 00:19:03.454 19.200 - 19.316: 92.6312% ( 1) 00:19:03.454 19.316 - 19.433: 92.6635% ( 3) 00:19:03.454 19.433 - 19.549: 92.7281% ( 6) 00:19:03.454 19.549 - 19.665: 92.8034% ( 7) 00:19:03.454 19.665 - 19.782: 92.9002% ( 9) 00:19:03.454 19.782 - 19.898: 93.0400% ( 13) 00:19:03.454 19.898 - 20.015: 93.2444% ( 19) 00:19:03.454 20.015 - 20.131: 93.4380% ( 18) 00:19:03.454 20.131 - 20.247: 93.6424% ( 19) 00:19:03.454 20.247 - 20.364: 93.8145% ( 16) 00:19:03.454 20.364 - 20.480: 93.9974% ( 17) 00:19:03.454 20.480 - 20.596: 94.1373% ( 13) 00:19:03.454 20.596 - 20.713: 94.3847% ( 23) 00:19:03.454 20.713 - 20.829: 94.7074% ( 30) 00:19:03.454 20.829 - 20.945: 95.0194% ( 29) 00:19:03.454 20.945 - 21.062: 95.4712% ( 42) 00:19:03.454 21.062 - 21.178: 95.9768% ( 47) 00:19:03.454 
21.178 - 21.295: 96.2565% ( 26) 00:19:03.454 21.295 - 21.411: 96.4716% ( 20) 00:19:03.454 21.411 - 21.527: 96.7405% ( 25) 00:19:03.454 21.527 - 21.644: 96.9127% ( 16) 00:19:03.454 21.644 - 21.760: 97.0848% ( 16) 00:19:03.454 21.760 - 21.876: 97.2569% ( 16) 00:19:03.454 21.876 - 21.993: 97.4290% ( 16) 00:19:03.454 21.993 - 22.109: 97.5796% ( 14) 00:19:03.454 22.109 - 22.225: 97.7194% ( 13) 00:19:03.454 22.225 - 22.342: 97.8055% ( 8) 00:19:03.454 22.342 - 22.458: 97.8485% ( 4) 00:19:03.454 22.458 - 22.575: 97.8701% ( 2) 00:19:03.454 22.575 - 22.691: 97.9346% ( 6) 00:19:03.454 22.691 - 22.807: 97.9669% ( 3) 00:19:03.454 22.807 - 22.924: 98.0314% ( 6) 00:19:03.454 22.924 - 23.040: 98.0744% ( 4) 00:19:03.454 23.040 - 23.156: 98.1605% ( 8) 00:19:03.454 23.156 - 23.273: 98.2035% ( 4) 00:19:03.454 23.273 - 23.389: 98.2358% ( 3) 00:19:03.454 23.389 - 23.505: 98.2896% ( 5) 00:19:03.454 23.505 - 23.622: 98.3326% ( 4) 00:19:03.454 23.622 - 23.738: 98.3434% ( 1) 00:19:03.454 23.738 - 23.855: 98.3864% ( 4) 00:19:03.454 23.855 - 23.971: 98.4402% ( 5) 00:19:03.454 23.971 - 24.087: 98.5155% ( 7) 00:19:03.454 24.087 - 24.204: 98.5800% ( 6) 00:19:03.454 24.204 - 24.320: 98.6769% ( 9) 00:19:03.454 24.320 - 24.436: 98.7522% ( 7) 00:19:03.454 24.436 - 24.553: 98.7844% ( 3) 00:19:03.454 24.553 - 24.669: 98.8382% ( 5) 00:19:03.454 24.669 - 24.785: 98.8920% ( 5) 00:19:03.454 24.785 - 24.902: 98.9458% ( 5) 00:19:03.454 24.902 - 25.018: 99.0426% ( 9) 00:19:03.454 25.018 - 25.135: 99.0964% ( 5) 00:19:03.454 25.135 - 25.251: 99.1717% ( 7) 00:19:03.454 25.251 - 25.367: 99.2362% ( 6) 00:19:03.454 25.367 - 25.484: 99.3115% ( 7) 00:19:03.713 25.484 - 25.600: 99.3438% ( 3) 00:19:03.713 25.600 - 25.716: 99.3653% ( 2) 00:19:03.713 25.716 - 25.833: 99.3761% ( 1) 00:19:03.713 25.833 - 25.949: 99.3976% ( 2) 00:19:03.713 25.949 - 26.065: 99.4299% ( 3) 00:19:03.713 26.065 - 26.182: 99.4621% ( 3) 00:19:03.713 26.182 - 26.298: 99.5052% ( 4) 00:19:03.713 26.298 - 26.415: 99.5374% ( 3) 00:19:03.713 26.415 - 26.531: 99.5590% ( 2) 00:19:03.713 26.531 - 26.647: 99.5912% ( 3) 00:19:03.713 26.647 - 26.764: 99.6020% ( 1) 00:19:03.713 26.880 - 26.996: 99.6235% ( 2) 00:19:03.713 26.996 - 27.113: 99.6343% ( 1) 00:19:03.713 27.113 - 27.229: 99.6558% ( 2) 00:19:03.713 27.229 - 27.345: 99.6665% ( 1) 00:19:03.713 27.345 - 27.462: 99.6880% ( 2) 00:19:03.713 27.578 - 27.695: 99.6988% ( 1) 00:19:03.713 27.811 - 27.927: 99.7096% ( 1) 00:19:03.713 28.160 - 28.276: 99.7203% ( 1) 00:19:03.713 28.276 - 28.393: 99.7311% ( 1) 00:19:03.713 28.858 - 28.975: 99.7418% ( 1) 00:19:03.713 30.022 - 30.255: 99.7526% ( 1) 00:19:03.713 30.487 - 30.720: 99.7633% ( 1) 00:19:03.713 31.185 - 31.418: 99.7741% ( 1) 00:19:03.713 31.418 - 31.651: 99.7956% ( 2) 00:19:03.713 31.651 - 31.884: 99.8064% ( 1) 00:19:03.713 31.884 - 32.116: 99.8171% ( 1) 00:19:03.713 32.116 - 32.349: 99.8279% ( 1) 00:19:03.713 32.349 - 32.582: 99.8386% ( 1) 00:19:03.713 32.582 - 32.815: 99.8494% ( 1) 00:19:03.713 34.211 - 34.444: 99.8602% ( 1) 00:19:03.713 37.702 - 37.935: 99.8709% ( 1) 00:19:03.713 37.935 - 38.167: 99.8924% ( 2) 00:19:03.713 38.633 - 38.865: 99.9032% ( 1) 00:19:03.713 39.796 - 40.029: 99.9139% ( 1) 00:19:03.713 40.262 - 40.495: 99.9247% ( 1) 00:19:03.713 40.495 - 40.727: 99.9355% ( 1) 00:19:03.713 41.193 - 41.425: 99.9462% ( 1) 00:19:03.713 42.589 - 42.822: 99.9570% ( 1) 00:19:03.713 42.822 - 43.055: 99.9677% ( 1) 00:19:03.713 52.829 - 53.062: 99.9785% ( 1) 00:19:03.713 67.491 - 67.956: 99.9892% ( 1) 00:19:03.713 103.796 - 104.262: 100.0000% ( 1) 00:19:03.713 00:19:03.713 
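Both overhead histograms above share one layout: each row names a latency bucket ("Range in us") and reports a cumulative percentage, so the first row at or above a target percentage marks that percentile's bucket. A quick way to pull, say, the p99 bucket out of a saved copy of this output (file name hypothetical; assumes the console timestamp prefixes have been stripped so each bucket sits on its own line):

    # print the upper edge of the first bucket whose cumulative
    # percentage reaches 99%
    awk '$2 == "-" && $4 ~ /%$/ {
        p = $4; sub(/%/, "", p)
        if (p + 0 >= 99) { b = $3; sub(/:/, "", b); print b " us"; exit }
    }' overhead-submit.log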
00:19:03.713 real 0m1.583s 00:19:03.713 user 0m1.006s 00:19:03.713 sys 0m0.573s 00:19:03.713 21:18:15 nvme.nvme_overhead -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:03.713 21:18:15 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:19:03.713 ************************************ 00:19:03.713 END TEST nvme_overhead 00:19:03.713 ************************************ 00:19:03.713 21:18:15 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:03.713 21:18:15 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:03.713 21:18:15 nvme -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:19:03.713 21:18:15 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:03.713 21:18:15 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:03.713 ************************************ 00:19:03.713 START TEST nvme_arbitration 00:19:03.713 ************************************ 00:19:03.713 21:18:15 nvme.nvme_arbitration -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:19:04.279 EAL: TSC is not safe to use in SMP mode 00:19:04.279 EAL: TSC is not invariant 00:19:04.279 [2024-07-14 21:18:15.577507] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:08.456 Initializing NVMe Controllers 00:19:08.456 Attaching to 0000:00:10.0 00:19:08.456 Attached to 0000:00:10.0 00:19:08.456 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:19:08.456 Associating QEMU NVMe Ctrl (12340 ) with lcore 1 00:19:08.456 Associating QEMU NVMe Ctrl (12340 ) with lcore 2 00:19:08.456 Associating QEMU NVMe Ctrl (12340 ) with lcore 3 00:19:08.456 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:19:08.456 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:19:08.456 Initialization complete. Launching workers. 
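The arbitration example was started with the configuration echoed just above. -q 64 sets the queue depth and -t 3 bounds the run at three seconds, both with the same meaning as in the perf runs earlier in this log; -w randrw with -M 50 plausibly requests a 50/50 random read/write mix, and -c 0xf matches the four lcore associations listed — treat those readings, and the remaining flags, as assumptions. The per-core IO/s figures that follow can be totalled with a one-liner (file name hypothetical; assumes timestamp prefixes are stripped):

    # sum the per-core IO/s column from the arbitration results
    awk '{ for (i = 2; i <= NF; i++) if ($i == "IO/s") sum += $(i - 1) }
         END { printf "total: %.2f IO/s\n", sum }' arbitration.log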
00:19:08.456 Starting thread on core 1 with urgent priority queue 00:19:08.456 Starting thread on core 2 with urgent priority queue 00:19:08.456 Starting thread on core 3 with urgent priority queue 00:19:08.456 Starting thread on core 0 with urgent priority queue 00:19:08.456 QEMU NVMe Ctrl (12340 ) core 0: 6925.00 IO/s 14.44 secs/100000 ios 00:19:08.456 QEMU NVMe Ctrl (12340 ) core 1: 6932.00 IO/s 14.43 secs/100000 ios 00:19:08.456 QEMU NVMe Ctrl (12340 ) core 2: 6924.00 IO/s 14.44 secs/100000 ios 00:19:08.456 QEMU NVMe Ctrl (12340 ) core 3: 6886.00 IO/s 14.52 secs/100000 ios 00:19:08.456 ======================================================== 00:19:08.456 00:19:08.456 00:19:08.456 real 0m4.245s 00:19:08.456 user 0m12.735s 00:19:08.456 sys 0m0.534s 00:19:08.456 21:18:19 nvme.nvme_arbitration -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:08.456 21:18:19 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:19:08.456 ************************************ 00:19:08.456 END TEST nvme_arbitration 00:19:08.456 ************************************ 00:19:08.456 21:18:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:08.456 21:18:19 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:08.456 21:18:19 nvme -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:19:08.456 21:18:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.456 21:18:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:08.456 ************************************ 00:19:08.456 START TEST nvme_single_aen 00:19:08.456 ************************************ 00:19:08.456 21:18:19 nvme.nvme_single_aen -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:19:08.457 EAL: TSC is not safe to use in SMP mode 00:19:08.457 EAL: TSC is not invariant 00:19:08.457 [2024-07-14 21:18:19.870527] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:08.457 Asynchronous Event Request test 00:19:08.457 Attaching to 0000:00:10.0 00:19:08.457 Attached to 0000:00:10.0 00:19:08.457 Reset controller to setup AER completions for this process 00:19:08.457 Registering asynchronous event callbacks... 00:19:08.457 Getting orig temperature thresholds of all controllers 00:19:08.457 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:19:08.457 Setting all controllers temperature threshold low to trigger AER 00:19:08.457 Waiting for all controllers temperature threshold to be set lower 00:19:08.457 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:19:08.457 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:19:08.457 Waiting for all controllers to trigger AER and reset threshold 00:19:08.457 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:19:08.457 Cleaning up... 
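The single-AEN test above provokes an Asynchronous Event Notification by dropping the temperature threshold (feature identifier 04h in the NVMe spec) below the drive's reported 323 Kelvin, then restores the original 343 Kelvin threshold once aer_cb fires. To repeat just this step outside the harness, the binary can be rerun exactly as the run_test line invoked it (a sketch; -T appears to select the temperature-threshold path and -i 0 reuses the job's shared-memory id — treat both readings as assumptions):

    # rerun the AER temperature-threshold test on its own
    sudo /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0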
00:19:08.457 00:19:08.457 real 0m0.563s 00:19:08.457 user 0m0.012s 00:19:08.457 sys 0m0.551s 00:19:08.457 21:18:19 nvme.nvme_single_aen -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:08.457 21:18:19 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:19:08.457 ************************************ 00:19:08.457 END TEST nvme_single_aen 00:19:08.457 ************************************ 00:19:08.457 21:18:19 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:08.457 21:18:19 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:19:08.457 21:18:19 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:08.457 21:18:19 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:08.457 21:18:19 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:08.457 ************************************ 00:19:08.457 START TEST nvme_doorbell_aers 00:19:08.457 ************************************ 00:19:08.457 21:18:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1123 -- # nvme_doorbell_aers 00:19:08.457 21:18:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:19:08.457 21:18:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:19:08.457 21:18:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:19:08.457 21:18:19 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:19:08.457 21:18:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:08.457 21:18:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1513 -- # local bdfs 00:19:08.457 21:18:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:08.457 21:18:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:08.457 21:18:19 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:08.457 21:18:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:19:08.457 21:18:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:19:08.457 21:18:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:19:08.457 21:18:20 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:09.025 EAL: TSC is not safe to use in SMP mode 00:19:09.025 EAL: TSC is not invariant 00:19:09.025 [2024-07-14 21:18:20.513281] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:09.025 Executing: test_write_invalid_db 00:19:09.025 Waiting for AER completion... 00:19:09.025 Asynchronous Event received. 00:19:09.025 Error Information Log Page received. 00:19:09.025 Success: test_write_invalid_db 00:19:09.025 00:19:09.025 Executing: test_invalid_db_write_overflow_sq 00:19:09.025 Waiting for AER completion... 00:19:09.025 Asynchronous Event received. 00:19:09.025 Error Information Log Page received. 00:19:09.025 Success: test_invalid_db_write_overflow_sq 00:19:09.025 00:19:09.025 Executing: test_invalid_db_write_overflow_cq 00:19:09.025 Waiting for AER completion... 00:19:09.025 Asynchronous Event received. 00:19:09.025 Error Information Log Page received.
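Before launching doorbell_aers, the test discovers its target through the get_nvme_bdfs helper traced above: gen_nvme.sh emits a JSON bdev config and jq extracts each controller's PCI address. Reconstructed from the xtrace output as a standalone sketch (the error message wording is an assumption; the real helper lives in autotest_common.sh):

    # enumerate NVMe PCI addresses (BDFs) the way the traced helper does
    get_nvme_bdfs() {
        local bdfs
        bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
        (( ${#bdfs[@]} == 0 )) && { echo 'No NVMe controllers found' >&2; return 1; }
        printf '%s\n' "${bdfs[@]}"
    }

Here it yields the single controller 0000:00:10.0, which is handed to doorbell_aers as -r 'trtype:PCIe traddr:0000:00:10.0' under a 10-second timeout; the final overflow_cq result appears below.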
00:19:09.025 Success: test_invalid_db_write_overflow_cq 00:19:09.025 00:19:09.025 00:19:09.025 real 0m0.601s 00:19:09.025 user 0m0.042s 00:19:09.025 sys 0m0.571s 00:19:09.025 21:18:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:09.025 21:18:20 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:19:09.025 ************************************ 00:19:09.025 END TEST nvme_doorbell_aers 00:19:09.025 ************************************ 00:19:09.284 21:18:20 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:09.284 21:18:20 nvme -- nvme/nvme.sh@97 -- # uname 00:19:09.284 21:18:20 nvme -- nvme/nvme.sh@97 -- # '[' FreeBSD '!=' FreeBSD ']' 00:19:09.284 21:18:20 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:19:09.284 21:18:20 nvme -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:09.284 21:18:20 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:09.284 21:18:20 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:09.284 ************************************ 00:19:09.284 START TEST bdev_nvme_reset_stuck_adm_cmd 00:19:09.284 ************************************ 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:19:09.284 * Looking for test storage... 00:19:09.284 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # bdfs=() 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1524 -- # local bdfs 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1513 -- # local bdfs 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=68849 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 68849 00:19:09.284 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@829 -- # '[' -z 68849 ']' 00:19:09.285 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:09.285 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.285 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:09.285 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:09.285 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.285 21:18:20 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:09.285 [2024-07-14 21:18:20.787083] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:09.285 [2024-07-14 21:18:20.787334] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:09.851 EAL: TSC is not safe to use in SMP mode 00:19:09.851 EAL: TSC is not invariant 00:19:10.109 [2024-07-14 21:18:21.411849] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:10.109 [2024-07-14 21:18:21.495237] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:10.109 [2024-07-14 21:18:21.495300] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:10.109 [2024-07-14 21:18:21.495323] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 2]. 00:19:10.109 [2024-07-14 21:18:21.495330] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 3]. 
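bdev_nvme_reset_stuck_adm_cmd follows the standard target-launch pattern visible above: start spdk_tgt on four cores (-m 0xF), install a trap that kills it on exit, and block in waitforlisten until the RPC socket at /var/tmp/spdk.sock answers. The reactor-started notices below are the target's first sign of life, but the scripts key off the socket instead. A condensed sketch (the polling loop body is an assumption; the real waitforlisten also caps its retries):

    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF &
    spdk_target_pid=$!
    trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT
    echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
    # poll the RPC socket until the target responds (hypothetical loop body)
    until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.1
    done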
00:19:10.109 [2024-07-14 21:18:21.499398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:10.109 [2024-07-14 21:18:21.499195] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:10.109 [2024-07-14 21:18:21.499287] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.109 [2024-07-14 21:18:21.499396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@862 -- # return 0 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:10.367 [2024-07-14 21:18:21.753023] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:10.367 nvme0n1 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_XXXXX.txt 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:10.367 true 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1720991901 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=68861 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:19:10.367 21:18:21 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:12.901 [2024-07-14 21:18:23.902489] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0] resetting controller 00:19:12.901 [2024-07-14 21:18:23.904338] 
nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:19:12.901 [2024-07-14 21:18:23.904369] nvme_qpair.c: 215:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:19:12.901 [2024-07-14 21:18:23.904380] nvme_qpair.c: 477:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:19:12.901 [2024-07-14 21:18:23.905496] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.901 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 68861 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 68861 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 68861 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_XXXXX.txt 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.lORFmA 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /tmp//sh-np.GR48D6 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_XXXXX.txt 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 68849 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@948 -- # '[' -z 68849 ']' 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@952 -- # kill -0 68849 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # uname 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # ps -c -o command 68849 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # tail -1 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:19:12.901 killing process with pid 68849 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@966 -- # echo 'killing process with pid 68849' 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@967 -- # kill 68849 00:19:12.901 21:18:23 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # wait 68849 00:19:12.901 21:18:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:19:12.901 21:18:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:19:12.901 00:19:12.901 real 0m3.618s 00:19:12.901 user 0m11.477s 00:19:12.901 sys 0m0.853s 00:19:12.901 21:18:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:12.901 ************************************ 00:19:12.901 END TEST bdev_nvme_reset_stuck_adm_cmd 00:19:12.901 ************************************ 00:19:12.901 21:18:24 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:19:12.901 21:18:24 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:12.901 21:18:24 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:19:12.901 21:18:24 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:19:12.901 21:18:24 nvme -- 
common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:12.901 21:18:24 nvme -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:12.901 21:18:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:12.901 ************************************ 00:19:12.901 START TEST nvme_fio 00:19:12.901 ************************************ 00:19:12.901 21:18:24 nvme.nvme_fio -- common/autotest_common.sh@1123 -- # nvme_fio_test 00:19:12.901 21:18:24 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:19:12.901 21:18:24 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:19:12.901 21:18:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:19:12.901 21:18:24 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:12.901 21:18:24 nvme.nvme_fio -- common/autotest_common.sh@1513 -- # local bdfs 00:19:12.901 21:18:24 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:12.901 21:18:24 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:12.901 21:18:24 nvme.nvme_fio -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:12.901 21:18:24 nvme.nvme_fio -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:19:12.901 21:18:24 nvme.nvme_fio -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:19:12.901 21:18:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0') 00:19:12.901 21:18:24 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:19:12.901 21:18:24 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:19:12.901 21:18:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:12.901 21:18:24 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:19:13.468 EAL: TSC is not safe to use in SMP mode 00:19:13.468 EAL: TSC is not invariant 00:19:13.468 [2024-07-14 21:18:24.847397] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:13.468 21:18:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:19:13.468 21:18:24 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:19:14.035 EAL: TSC is not safe to use in SMP mode 00:19:14.035 EAL: TSC is not invariant 00:19:14.035 [2024-07-14 21:18:25.427864] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:14.035 21:18:25 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:19:14.035 21:18:25 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1339 -- # local sanitizers 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1340 -- # local 
plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # shift 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local asan_lib= 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libasan 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # asan_lib= 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:19:14.035 21:18:25 nvme.nvme_fio -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:19:14.035 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:19:14.035 fio-3.35 00:19:14.293 Starting 1 thread 00:19:14.551 EAL: TSC is not safe to use in SMP mode 00:19:14.551 EAL: TSC is not invariant 00:19:14.809 [2024-07-14 21:18:26.099869] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:17.339 00:19:17.339 test: (groupid=0, jobs=1): err= 0: pid=101529: Sun Jul 14 21:18:28 2024 00:19:17.339 read: IOPS=39.6k, BW=155MiB/s (162MB/s)(310MiB/2001msec) 00:19:17.339 slat (nsec): min=353, max=49023, avg=646.13, stdev=1294.89 00:19:17.339 clat (usec): min=267, max=4563, avg=1618.45, stdev=470.39 00:19:17.339 lat (usec): min=268, max=4565, avg=1619.10, stdev=470.35 00:19:17.339 clat percentiles (usec): 00:19:17.340 | 1.00th=[ 437], 5.00th=[ 701], 10.00th=[ 1188], 20.00th=[ 1336], 00:19:17.340 | 30.00th=[ 1418], 40.00th=[ 1516], 50.00th=[ 1598], 60.00th=[ 1680], 00:19:17.340 | 70.00th=[ 1778], 80.00th=[ 1909], 90.00th=[ 2147], 95.00th=[ 2442], 00:19:17.340 | 99.00th=[ 2966], 99.50th=[ 3163], 99.90th=[ 3752], 99.95th=[ 4015], 00:19:17.340 | 99.99th=[ 4424] 00:19:17.340 bw ( KiB/s): min=156384, max=160872, per=99.74%, avg=158042.67, stdev=2462.39, samples=3 00:19:17.340 iops : min=39096, max=40218, avg=39510.67, stdev=615.60, samples=3 00:19:17.340 write: IOPS=39.4k, BW=154MiB/s (161MB/s)(308MiB/2001msec); 0 zone resets 00:19:17.340 slat (nsec): min=381, max=44652, avg=944.33, stdev=1901.81 00:19:17.340 clat (usec): min=301, max=4495, avg=1616.26, stdev=466.69 00:19:17.340 lat (usec): min=302, max=4495, avg=1617.21, stdev=466.63 00:19:17.340 clat percentiles (usec): 00:19:17.340 | 1.00th=[ 441], 5.00th=[ 693], 10.00th=[ 1188], 20.00th=[ 1336], 00:19:17.340 | 
30.00th=[ 1434], 40.00th=[ 1516], 50.00th=[ 1598], 60.00th=[ 1680], 00:19:17.340 | 70.00th=[ 1778], 80.00th=[ 1893], 90.00th=[ 2147], 95.00th=[ 2442], 00:19:17.340 | 99.00th=[ 2966], 99.50th=[ 3163], 99.90th=[ 3851], 99.95th=[ 4015], 00:19:17.340 | 99.99th=[ 4424] 00:19:17.340 bw ( KiB/s): min=155952, max=159752, per=99.94%, avg=157589.33, stdev=1953.71, samples=3 00:19:17.340 iops : min=38988, max=39938, avg=39397.33, stdev=488.43, samples=3 00:19:17.340 lat (usec) : 500=2.08%, 750=3.40%, 1000=1.89% 00:19:17.340 lat (msec) : 2=77.46%, 4=15.11%, 10=0.06% 00:19:17.340 cpu : usr=99.95%, sys=0.00%, ctx=23, majf=0, minf=2 00:19:17.340 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:19:17.340 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:19:17.340 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:19:17.340 issued rwts: total=79267,78883,0,0 short=0,0,0,0 dropped=0,0,0,0 00:19:17.340 latency : target=0, window=0, percentile=100.00%, depth=128 00:19:17.340 00:19:17.340 Run status group 0 (all jobs): 00:19:17.340 READ: bw=155MiB/s (162MB/s), 155MiB/s-155MiB/s (162MB/s-162MB/s), io=310MiB (325MB), run=2001-2001msec 00:19:17.340 WRITE: bw=154MiB/s (161MB/s), 154MiB/s-154MiB/s (161MB/s-161MB/s), io=308MiB (323MB), run=2001-2001msec 00:19:17.904 21:18:29 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:19:17.904 21:18:29 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:19:17.904 00:19:17.904 real 0m4.933s 00:19:17.904 user 0m2.383s 00:19:17.904 sys 0m2.470s 00:19:17.904 21:18:29 nvme.nvme_fio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:17.904 ************************************ 00:19:17.904 END TEST nvme_fio 00:19:17.904 ************************************ 00:19:17.904 21:18:29 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:19:17.904 21:18:29 nvme -- common/autotest_common.sh@1142 -- # return 0 00:19:17.904 00:19:17.904 real 0m25.088s 00:19:17.904 user 0m30.653s 00:19:17.904 sys 0m11.951s 00:19:17.904 21:18:29 nvme -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:17.904 21:18:29 nvme -- common/autotest_common.sh@10 -- # set +x 00:19:17.904 ************************************ 00:19:17.904 END TEST nvme 00:19:17.904 ************************************ 00:19:17.904 21:18:29 -- common/autotest_common.sh@1142 -- # return 0 00:19:17.904 21:18:29 -- spdk/autotest.sh@217 -- # [[ 0 -eq 1 ]] 00:19:17.904 21:18:29 -- spdk/autotest.sh@221 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:19:17.904 21:18:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:17.904 21:18:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:17.904 21:18:29 -- common/autotest_common.sh@10 -- # set +x 00:19:17.904 ************************************ 00:19:17.904 START TEST nvme_scc 00:19:17.904 ************************************ 00:19:17.904 21:18:29 nvme_scc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:19:17.904 * Looking for test storage... 
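The fio run that just completed is driven through SPDK's external ioengine rather than a kernel block device: the wrapper above runs ldd on the plugin to find sanitizer runtimes to preload (none on this build, so asan_lib stays empty), then launches fio with LD_PRELOAD pointing at the spdk_nvme plugin and a filename that encodes the PCIe transport. Stripped to its essentials (a sketch of the traced commands, not the full fio_nvme wrapper):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    # preload libasan/libclang_rt.asan first if the plugin links them (empty here)
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096

The dots in traddr=0000.00.10.0 are deliberate: fio splits filenames on ':', so the BDF's colons are replaced before being passed through.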
00:19:17.904 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:17.904 21:18:29 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:19:17.904 21:18:29 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:19:17.904 21:18:29 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:19:17.904 21:18:29 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:17.904 21:18:29 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:18.163 21:18:29 nvme_scc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:18.163 21:18:29 nvme_scc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:18.163 21:18:29 nvme_scc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:18.163 21:18:29 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:18.163 21:18:29 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:18.163 21:18:29 nvme_scc -- paths/export.sh@4 -- # export PATH 00:19:18.163 21:18:29 nvme_scc -- paths/export.sh@5 -- # echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:18.163 21:18:29 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:19:18.163 21:18:29 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:19:18.163 21:18:29 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:19:18.163 21:18:29 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:19:18.163 21:18:29 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:19:18.163 21:18:29 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:19:18.164 21:18:29 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:19:18.164 21:18:29 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:19:18.164 21:18:29 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:19:18.164 21:18:29 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.164 21:18:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:19:18.164 21:18:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ FreeBSD == Linux ]] 00:19:18.164 21:18:29 nvme_scc -- nvme/nvme_scc.sh@12 -- # exit 0 00:19:18.164 00:19:18.164 real 0m0.171s 00:19:18.164 user 0m0.116s 00:19:18.164 sys 0m0.124s 00:19:18.164 21:18:29 nvme_scc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:18.164 21:18:29 nvme_scc -- common/autotest_common.sh@10 -- # set +x 00:19:18.164 ************************************ 00:19:18.164 END TEST nvme_scc 00:19:18.164 ************************************ 00:19:18.164 21:18:29 -- common/autotest_common.sh@1142 -- # return 0 00:19:18.164 21:18:29 -- spdk/autotest.sh@223 -- # [[ 0 -eq 1 ]] 00:19:18.164 21:18:29 -- spdk/autotest.sh@226 -- # [[ 0 -eq 1 ]] 00:19:18.164 21:18:29 -- spdk/autotest.sh@229 -- # [[ '' -eq 1 ]] 00:19:18.164 21:18:29 -- spdk/autotest.sh@232 -- # [[ 0 -eq 1 ]] 00:19:18.164 21:18:29 -- 
spdk/autotest.sh@236 -- # [[ '' -eq 1 ]] 00:19:18.164 21:18:29 -- spdk/autotest.sh@240 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:19:18.164 21:18:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:18.164 21:18:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:18.164 21:18:29 -- common/autotest_common.sh@10 -- # set +x 00:19:18.164 ************************************ 00:19:18.164 START TEST nvme_rpc 00:19:18.164 ************************************ 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:19:18.164 * Looking for test storage... 00:19:18.164 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:18.164 21:18:29 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:18.164 21:18:29 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@1524 -- # bdfs=() 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@1524 -- # local bdfs 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@1513 -- # bdfs=() 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@1513 -- # local bdfs 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@1514 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:00:10.0 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@1527 -- # echo 0000:00:10.0 00:19:18.164 21:18:29 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:19:18.164 21:18:29 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:19:18.164 21:18:29 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=69099 00:19:18.164 21:18:29 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:19:18.164 21:18:29 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 69099 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@829 -- # '[' -z 69099 ']' 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:18.164 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:18.164 21:18:29 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:18.164 [2024-07-14 21:18:29.676184] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 
00:19:18.164 [2024-07-14 21:18:29.676392] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:18.731 EAL: TSC is not safe to use in SMP mode 00:19:18.731 EAL: TSC is not invariant 00:19:18.731 [2024-07-14 21:18:30.209015] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:18.989 [2024-07-14 21:18:30.292713] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:18.989 [2024-07-14 21:18:30.292778] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 00:19:18.989 [2024-07-14 21:18:30.295658] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:18.989 [2024-07-14 21:18:30.295649] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:19.247 21:18:30 nvme_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:19.247 21:18:30 nvme_rpc -- common/autotest_common.sh@862 -- # return 0 00:19:19.247 21:18:30 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:19:19.505 [2024-07-14 21:18:30.839175] pci_event.c: 228:spdk_pci_event_listen: *ERROR*: Non-Linux does not support this operation 00:19:19.505 Nvme0n1 00:19:19.505 21:18:30 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:19:19.505 21:18:30 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:19:19.762 request: 00:19:19.762 { 00:19:19.762 "bdev_name": "Nvme0n1", 00:19:19.762 "filename": "non_existing_file", 00:19:19.762 "method": "bdev_nvme_apply_firmware", 00:19:19.762 "req_id": 1 00:19:19.762 } 00:19:19.762 Got JSON-RPC error response 00:19:19.762 response: 00:19:19.762 { 00:19:19.762 "code": -32603, 00:19:19.762 "message": "open file failed." 
00:19:19.762 } 00:19:19.762 21:18:31 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:19:19.762 21:18:31 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:19:19.762 21:18:31 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:19:20.021 21:18:31 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:19:20.021 21:18:31 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 69099 00:19:20.021 21:18:31 nvme_rpc -- common/autotest_common.sh@948 -- # '[' -z 69099 ']' 00:19:20.021 21:18:31 nvme_rpc -- common/autotest_common.sh@952 -- # kill -0 69099 00:19:20.021 21:18:31 nvme_rpc -- common/autotest_common.sh@953 -- # uname 00:19:20.021 21:18:31 nvme_rpc -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:20.021 21:18:31 nvme_rpc -- common/autotest_common.sh@956 -- # ps -c -o command 69099 00:19:20.021 21:18:31 nvme_rpc -- common/autotest_common.sh@956 -- # tail -1 00:19:20.021 21:18:31 nvme_rpc -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:19:20.021 killing process with pid 69099 00:19:20.021 21:18:31 nvme_rpc -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:19:20.021 21:18:31 nvme_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69099' 00:19:20.021 21:18:31 nvme_rpc -- common/autotest_common.sh@967 -- # kill 69099 00:19:20.021 21:18:31 nvme_rpc -- common/autotest_common.sh@972 -- # wait 69099 00:19:20.280 00:19:20.280 real 0m2.197s 00:19:20.280 user 0m3.973s 00:19:20.280 sys 0m0.757s 00:19:20.280 21:18:31 nvme_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:20.280 21:18:31 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:19:20.280 ************************************ 00:19:20.280 END TEST nvme_rpc 00:19:20.280 ************************************ 00:19:20.280 21:18:31 -- common/autotest_common.sh@1142 -- # return 0 00:19:20.280 21:18:31 -- spdk/autotest.sh@241 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:19:20.280 21:18:31 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:19:20.280 21:18:31 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:19:20.280 21:18:31 -- common/autotest_common.sh@10 -- # set +x 00:19:20.280 ************************************ 00:19:20.280 START TEST nvme_rpc_timeouts 00:19:20.280 ************************************ 00:19:20.280 21:18:31 nvme_rpc_timeouts -- common/autotest_common.sh@1123 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:19:20.537 * Looking for test storage... 
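nvme_rpc, which just finished, deliberately drives an error path: attach the controller as Nvme0, ask bdev_nvme_apply_firmware to load a file that does not exist, and require the JSON-RPC layer to answer with code -32603 / "open file failed." instead of taking the target down. Condensed from the traced RPCs (a sketch; the surrounding checks live in nvme_rpc.sh):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
    if $rpc bdev_nvme_apply_firmware non_existing_file Nvme0n1; then
        echo 'expected firmware apply to fail' >&2; exit 1    # rv must be non-zero
    fi
    $rpc bdev_nvme_detach_controller Nvme0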
00:19:20.537 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:19:20.537 21:18:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:20.537 21:18:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_69136 00:19:20.537 21:18:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_69136 00:19:20.537 21:18:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=69164 00:19:20.537 21:18:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:19:20.537 21:18:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:19:20.537 21:18:31 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 69164 00:19:20.537 21:18:31 nvme_rpc_timeouts -- common/autotest_common.sh@829 -- # '[' -z 69164 ']' 00:19:20.537 21:18:31 nvme_rpc_timeouts -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:20.537 21:18:31 nvme_rpc_timeouts -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:20.537 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:20.537 21:18:31 nvme_rpc_timeouts -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:20.537 21:18:31 nvme_rpc_timeouts -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:20.537 21:18:31 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:19:20.537 [2024-07-14 21:18:31.881770] Starting SPDK v24.09-pre git sha1 719d03c6a / DPDK 24.03.0 initialization... 00:19:20.537 [2024-07-14 21:18:31.882020] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 ] 00:19:21.126 EAL: TSC is not safe to use in SMP mode 00:19:21.126 EAL: TSC is not invariant 00:19:21.126 [2024-07-14 21:18:32.392664] app.c: 908:spdk_app_start: *NOTICE*: Total cores available: 2 00:19:21.126 [2024-07-14 21:18:32.467874] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 0]. 00:19:21.126 [2024-07-14 21:18:32.467935] app.c: 927:spdk_app_start: *NOTICE*: Unable to parse /proc/stat [core: 1]. 
00:19:21.127 [2024-07-14 21:18:32.471026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:21.127 [2024-07-14 21:18:32.471020] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:21.385 21:18:32 nvme_rpc_timeouts -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:21.385 21:18:32 nvme_rpc_timeouts -- common/autotest_common.sh@862 -- # return 0 00:19:21.385 Checking default timeout settings: 00:19:21.385 21:18:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:19:21.385 21:18:32 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:21.950 Making settings changes with rpc: 00:19:21.950 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:19:21.950 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:19:22.208 Check default vs. modified settings: 00:19:22.208 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. modified settings: 00:19:22.208 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:19:22.466 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:19:22.466 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:22.466 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:22.466 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_69136 00:19:22.466 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:22.466 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:19:22.466 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_69136 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:19:22.467 Setting action_on_timeout is changed as expected. 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 
00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_69136 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_69136 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 00:19:22.467 Setting timeout_us is changed as expected. 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_69136 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_69136 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:19:22.467 Setting timeout_admin_us is changed as expected. 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 
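Each of the three checks above follows the same recipe: save_config is snapshotted before and after bdev_nvme_set_options, then for every key the saved JSON is grepped, the second field taken, and punctuation stripped so only the bare value remains for comparison. The loop, condensed from the trace (a sketch reusing the temp-file names printed above; the failure message is an assumption):

    for setting in action_on_timeout timeout_us timeout_admin_us; do
        before=$(grep "$setting" /tmp/settings_default_69136 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        after=$(grep "$setting" /tmp/settings_modified_69136 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
        [ "$before" = "$after" ] && { echo "Setting $setting was not changed" >&2; exit 1; }
        echo "Setting $setting is changed as expected."
    done

With the defaults none/0/0 rewritten to abort/12000000/24000000, all three comparisons pass, and the test prints RPC TIMEOUT SETTING TEST PASSED below.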
00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_69136 /tmp/settings_modified_69136 00:19:22.467 21:18:33 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 69164 00:19:22.467 21:18:33 nvme_rpc_timeouts -- common/autotest_common.sh@948 -- # '[' -z 69164 ']' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- common/autotest_common.sh@952 -- # kill -0 69164 00:19:22.467 21:18:33 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # uname 00:19:22.467 21:18:33 nvme_rpc_timeouts -- common/autotest_common.sh@953 -- # '[' FreeBSD = Linux ']' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # ps -c -o command 69164 00:19:22.467 21:18:33 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # tail -1 00:19:22.467 21:18:33 nvme_rpc_timeouts -- common/autotest_common.sh@956 -- # process_name=spdk_tgt 00:19:22.467 21:18:33 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # '[' spdk_tgt = sudo ']' 00:19:22.467 21:18:33 nvme_rpc_timeouts -- common/autotest_common.sh@966 -- # echo 'killing process with pid 69164' 00:19:22.467 killing process with pid 69164 00:19:22.467 21:18:33 nvme_rpc_timeouts -- common/autotest_common.sh@967 -- # kill 69164 00:19:22.467 21:18:33 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # wait 69164 00:19:22.726 RPC TIMEOUT SETTING TEST PASSED. 00:19:22.726 21:18:34 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 00:19:22.726 00:19:22.726 real 0m2.365s 00:19:22.726 user 0m4.539s 00:19:22.726 sys 0m0.702s 00:19:22.726 21:18:34 nvme_rpc_timeouts -- common/autotest_common.sh@1124 -- # xtrace_disable 00:19:22.726 21:18:34 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:19:22.726 ************************************ 00:19:22.726 END TEST nvme_rpc_timeouts 00:19:22.726 ************************************ 00:19:22.726 21:18:34 -- common/autotest_common.sh@1142 -- # return 0 00:19:22.726 21:18:34 -- spdk/autotest.sh@243 -- # uname -s 00:19:22.726 21:18:34 -- spdk/autotest.sh@243 -- # '[' FreeBSD = Linux ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@247 -- # [[ 0 -eq 1 ]] 00:19:22.726 21:18:34 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@260 -- # timing_exit lib 00:19:22.726 21:18:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:22.726 21:18:34 -- common/autotest_common.sh@10 -- # set +x 00:19:22.726 21:18:34 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@279 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:19:22.726 21:18:34 -- spdk/autotest.sh@363 -- # [[ 0 -eq 
1 ]] 00:19:22.726 21:18:34 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:19:22.726 21:18:34 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:19:22.726 21:18:34 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:19:22.726 21:18:34 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:19:22.726 21:18:34 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:19:22.726 21:18:34 -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:22.726 21:18:34 -- common/autotest_common.sh@10 -- # set +x 00:19:22.726 21:18:34 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:19:22.726 21:18:34 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:19:22.726 21:18:34 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:19:22.726 21:18:34 -- common/autotest_common.sh@10 -- # set +x 00:19:23.292 setup.sh cleanup function not yet supported on FreeBSD 00:19:23.292 21:18:34 -- common/autotest_common.sh@1451 -- # return 0 00:19:23.292 21:18:34 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:19:23.292 21:18:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:23.292 21:18:34 -- common/autotest_common.sh@10 -- # set +x 00:19:23.292 21:18:34 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:19:23.292 21:18:34 -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:23.292 21:18:34 -- common/autotest_common.sh@10 -- # set +x 00:19:23.292 21:18:34 -- spdk/autotest.sh@387 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:19:23.292 21:18:34 -- spdk/autotest.sh@389 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:19:23.292 21:18:34 -- spdk/autotest.sh@391 -- # hash lcov 00:19:23.292 /home/vagrant/spdk_repo/spdk/autotest.sh: line 391: hash: lcov: not found 00:19:23.550 21:18:34 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:19:23.550 21:18:34 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:19:23.550 21:18:34 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:23.550 21:18:34 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:23.550 21:18:34 -- paths/export.sh@2 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:23.550 21:18:34 -- paths/export.sh@3 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:23.550 21:18:34 -- paths/export.sh@4 -- $ export PATH 00:19:23.550 21:18:34 -- paths/export.sh@5 -- $ echo /opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/go/1.21.1/bin:/sbin:/bin:/usr/sbin:/usr/bin:/usr/local/sbin:/usr/local/bin:/home/vagrant/bin 00:19:23.550 21:18:34 -- common/autobuild_common.sh@443 -- $ out=/home/vagrant/spdk_repo/spdk/../output 00:19:23.550 21:18:34 -- common/autobuild_common.sh@444 -- $ date +%s 00:19:23.550 21:18:34 -- common/autobuild_common.sh@444 -- $ mktemp -dt spdk_1720991914.XXXXXX 00:19:23.550 21:18:34 -- common/autobuild_common.sh@444 -- $ SPDK_WORKSPACE=/tmp/spdk_1720991914.XXXXXX.UNgHiIhJnp 00:19:23.550 21:18:34 -- common/autobuild_common.sh@446 -- $ [[ -n '' ]] 00:19:23.550 21:18:34 -- common/autobuild_common.sh@450 -- $ '[' -n '' ']' 00:19:23.550 21:18:34 -- common/autobuild_common.sh@453 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/' 00:19:23.550 21:18:34 -- common/autobuild_common.sh@457 -- $ scanbuild_exclude+=' 
--exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp' 00:19:23.550 21:18:34 -- common/autobuild_common.sh@459 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs' 00:19:23.550 21:18:34 -- common/autobuild_common.sh@460 -- $ get_config_params 00:19:23.550 21:18:34 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:19:23.550 21:18:34 -- common/autotest_common.sh@10 -- $ set +x 00:19:23.550 21:18:35 -- common/autobuild_common.sh@460 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:19:23.550 21:18:35 -- common/autobuild_common.sh@462 -- $ start_monitor_resources 00:19:23.550 21:18:35 -- pm/common@17 -- $ local monitor 00:19:23.550 21:18:35 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:19:23.550 21:18:35 -- pm/common@25 -- $ sleep 1 00:19:23.550 21:18:35 -- pm/common@21 -- $ date +%s 00:19:23.550 21:18:35 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autopackage.sh.1720991915 00:19:23.550 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autopackage.sh.1720991915_collect-vmstat.pm.log 00:19:24.923 21:18:36 -- common/autobuild_common.sh@463 -- $ trap stop_monitor_resources EXIT 00:19:24.923 21:18:36 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j10 00:19:24.923 21:18:36 -- spdk/autopackage.sh@11 -- $ cd /home/vagrant/spdk_repo/spdk 00:19:24.923 21:18:36 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:19:24.923 21:18:36 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:19:24.923 21:18:36 -- spdk/autopackage.sh@18 -- $ [[ 1 -eq 0 ]] 00:19:24.923 21:18:36 -- spdk/autopackage.sh@23 -- $ timing_enter build_release 00:19:24.923 21:18:36 -- common/autotest_common.sh@722 -- $ xtrace_disable 00:19:24.923 21:18:36 -- common/autotest_common.sh@10 -- $ set +x 00:19:24.923 21:18:36 -- spdk/autopackage.sh@26 -- $ [[ /usr/bin/clang == *clang* ]] 00:19:24.923 21:18:36 -- spdk/autopackage.sh@27 -- $ nproc 00:19:24.923 21:18:36 -- spdk/autopackage.sh@27 -- $ jobs=5 00:19:24.923 21:18:36 -- spdk/autopackage.sh@28 -- $ case "$(uname -s)" in 00:19:24.923 21:18:36 -- spdk/autopackage.sh@28 -- $ uname -s 00:19:24.923 21:18:36 -- spdk/autopackage.sh@28 -- $ case "$(uname -s)" in 00:19:24.923 21:18:36 -- spdk/autopackage.sh@32 -- $ export LD=ld.lld 00:19:24.923 21:18:36 -- spdk/autopackage.sh@32 -- $ LD=ld.lld 00:19:24.923 21:18:36 -- spdk/autopackage.sh@36 -- $ [[ -n '' ]] 00:19:24.923 21:18:36 -- spdk/autopackage.sh@40 -- $ get_config_params 00:19:24.923 21:18:36 -- spdk/autopackage.sh@40 -- $ sed s/--enable-debug//g 00:19:24.923 21:18:36 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:19:24.923 21:18:36 -- common/autotest_common.sh@10 -- $ set +x 00:19:24.923 21:18:36 -- spdk/autopackage.sh@40 -- $ config_params=' --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio' 00:19:24.923 21:18:36 -- spdk/autopackage.sh@41 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --enable-lto --disable-unit-tests 00:19:24.923 Notice: Vhost, rte_vhost library, virtio, and fuse 00:19:24.923 are only supported on Linux. Turning off default feature. 
00:19:24.923 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:19:24.923 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build 00:19:24.923 RDMA_OPTION_ID_ACK_TIMEOUT is not supported 00:19:25.182 Using 'verbs' RDMA provider 00:19:33.548 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done. 00:19:41.672 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done. 00:19:41.672 Creating mk/config.mk...done. 00:19:41.672 Creating mk/cc.flags.mk...done. 00:19:41.672 Type 'gmake' to build. 00:19:41.672 21:18:53 -- spdk/autopackage.sh@43 -- $ gmake -j10 00:19:41.929 gmake[1]: Nothing to be done for 'all'. 00:19:41.929 ps: stdin: not a terminal 00:19:47.196 The Meson build system 00:19:47.196 Version: 1.4.0 00:19:47.196 Source dir: /home/vagrant/spdk_repo/spdk/dpdk 00:19:47.196 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:19:47.196 Build type: native build 00:19:47.196 Program cat found: YES (/bin/cat) 00:19:47.196 Project name: DPDK 00:19:47.196 Project version: 24.03.0 00:19:47.196 C compiler for the host machine: /usr/bin/clang (clang 16.0.6 "FreeBSD clang version 16.0.6 (https://github.com/llvm/llvm-project.git llvmorg-16.0.6-0-g7cbf1a259152)") 00:19:47.196 C linker for the host machine: /usr/bin/clang ld.lld 16.0.6 00:19:47.196 Host machine cpu family: x86_64 00:19:47.196 Host machine cpu: x86_64 00:19:47.196 Message: ## Building in Developer Mode ## 00:19:47.196 Program pkg-config found: YES (/usr/local/bin/pkg-config) 00:19:47.196 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh) 00:19:47.196 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:19:47.196 Program python3 found: YES (/usr/local/bin/python3.9) 00:19:47.196 Program cat found: YES (/bin/cat) 00:19:47.196 Compiler for C supports arguments -march=native: YES 00:19:47.196 Checking for size of "void *" : 8 00:19:47.196 Checking for size of "void *" : 8 (cached) 00:19:47.196 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:19:47.196 Library m found: YES 00:19:47.196 Library numa found: NO 00:19:47.196 Library fdt found: NO 00:19:47.196 Library execinfo found: YES 00:19:47.196 Has header "execinfo.h" : YES 00:19:47.196 Found pkg-config: YES (/usr/local/bin/pkg-config) 2.2.0 00:19:47.196 Run-time dependency libarchive found: NO (tried pkgconfig) 00:19:47.196 Run-time dependency libbsd found: NO (tried pkgconfig) 00:19:47.196 Run-time dependency jansson found: NO (tried pkgconfig) 00:19:47.196 Run-time dependency openssl found: YES 3.0.13 00:19:47.196 Run-time dependency libpcap found: NO (tried pkgconfig) 00:19:47.196 Library pcap found: YES 00:19:47.196 Has header "pcap.h" with dependency -lpcap: YES 00:19:47.196 Compiler for C supports arguments -Wcast-qual: YES 00:19:47.196 Compiler for C supports arguments -Wdeprecated: YES 00:19:47.196 Compiler for C supports arguments -Wformat: YES 00:19:47.196 Compiler for C supports arguments -Wformat-nonliteral: YES 00:19:47.196 Compiler for C supports arguments -Wformat-security: YES 00:19:47.196 Compiler for C supports arguments -Wmissing-declarations: YES 00:19:47.196 Compiler for C supports arguments -Wmissing-prototypes: YES 00:19:47.196 Compiler for C supports arguments -Wnested-externs: YES 00:19:47.196 Compiler for C supports arguments -Wold-style-definition: YES 00:19:47.196 Compiler for C supports arguments -Wpointer-arith: YES 00:19:47.196 
Compiler for C supports arguments -Wsign-compare: YES 00:19:47.196 Compiler for C supports arguments -Wstrict-prototypes: YES 00:19:47.196 Compiler for C supports arguments -Wundef: YES 00:19:47.196 Compiler for C supports arguments -Wwrite-strings: YES 00:19:47.196 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:19:47.196 Compiler for C supports arguments -Wno-packed-not-aligned: NO 00:19:47.196 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:19:47.196 Compiler for C supports arguments -mavx512f: YES 00:19:47.196 Checking if "AVX512 checking" compiles: YES 00:19:47.196 Fetching value of define "__SSE4_2__" : 1 00:19:47.196 Fetching value of define "__AES__" : 1 00:19:47.196 Fetching value of define "__AVX__" : 1 00:19:47.196 Fetching value of define "__AVX2__" : 1 00:19:47.196 Fetching value of define "__AVX512BW__" : (undefined) 00:19:47.196 Fetching value of define "__AVX512CD__" : (undefined) 00:19:47.196 Fetching value of define "__AVX512DQ__" : (undefined) 00:19:47.196 Fetching value of define "__AVX512F__" : (undefined) 00:19:47.197 Fetching value of define "__AVX512VL__" : (undefined) 00:19:47.197 Fetching value of define "__PCLMUL__" : 1 00:19:47.197 Fetching value of define "__RDRND__" : 1 00:19:47.197 Fetching value of define "__RDSEED__" : 1 00:19:47.197 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:19:47.197 Fetching value of define "__znver1__" : (undefined) 00:19:47.197 Fetching value of define "__znver2__" : (undefined) 00:19:47.197 Fetching value of define "__znver3__" : (undefined) 00:19:47.197 Fetching value of define "__znver4__" : (undefined) 00:19:47.197 Compiler for C supports arguments -Wno-format-truncation: NO 00:19:47.197 Message: lib/log: Defining dependency "log" 00:19:47.197 Message: lib/kvargs: Defining dependency "kvargs" 00:19:47.197 Message: lib/telemetry: Defining dependency "telemetry" 00:19:47.197 Checking if "Detect argument count for CPU_OR" compiles: YES 00:19:47.197 Checking for function "getentropy" : YES 00:19:47.197 Message: lib/eal: Defining dependency "eal" 00:19:47.197 Message: lib/ring: Defining dependency "ring" 00:19:47.197 Message: lib/rcu: Defining dependency "rcu" 00:19:47.197 Message: lib/mempool: Defining dependency "mempool" 00:19:47.197 Message: lib/mbuf: Defining dependency "mbuf" 00:19:47.197 Fetching value of define "__PCLMUL__" : 1 (cached) 00:19:47.197 Fetching value of define "__AVX512F__" : (undefined) (cached) 00:19:47.197 Compiler for C supports arguments -mpclmul: YES 00:19:47.197 Compiler for C supports arguments -maes: YES 00:19:47.197 Compiler for C supports arguments -mavx512f: YES (cached) 00:19:47.197 Compiler for C supports arguments -mavx512bw: YES 00:19:47.197 Compiler for C supports arguments -mavx512dq: YES 00:19:47.197 Compiler for C supports arguments -mavx512vl: YES 00:19:47.197 Compiler for C supports arguments -mvpclmulqdq: YES 00:19:47.197 Compiler for C supports arguments -mavx2: YES 00:19:47.197 Compiler for C supports arguments -mavx: YES 00:19:47.197 Message: lib/net: Defining dependency "net" 00:19:47.197 Message: lib/meter: Defining dependency "meter" 00:19:47.197 Message: lib/ethdev: Defining dependency "ethdev" 00:19:47.197 Message: lib/pci: Defining dependency "pci" 00:19:47.197 Message: lib/cmdline: Defining dependency "cmdline" 00:19:47.197 Message: lib/hash: Defining dependency "hash" 00:19:47.197 Message: lib/timer: Defining dependency "timer" 00:19:47.197 Message: lib/compressdev: Defining dependency "compressdev" 00:19:47.197 
Message: lib/cryptodev: Defining dependency "cryptodev" 00:19:47.197 Message: lib/dmadev: Defining dependency "dmadev" 00:19:47.197 Compiler for C supports arguments -Wno-cast-qual: YES 00:19:47.197 Message: lib/reorder: Defining dependency "reorder" 00:19:47.197 Message: lib/security: Defining dependency "security" 00:19:47.197 Has header "linux/userfaultfd.h" : NO 00:19:47.197 Has header "linux/vduse.h" : NO 00:19:47.197 Compiler for C supports arguments -Wno-format-truncation: NO (cached) 00:19:47.197 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:19:47.197 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:19:47.197 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:19:47.197 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:19:47.197 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:19:47.197 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:19:47.197 Message: Disabling vdpa/* drivers: missing internal dependency "vhost" 00:19:47.197 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:19:47.197 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:19:47.197 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:19:47.197 Program doxygen found: YES (/usr/local/bin/doxygen) 00:19:47.197 Configuring doxy-api-html.conf using configuration 00:19:47.197 Configuring doxy-api-man.conf using configuration 00:19:47.197 Program mandb found: NO 00:19:47.197 Program sphinx-build found: NO 00:19:47.197 Configuring rte_build_config.h using configuration 00:19:47.197 Message: 00:19:47.197 ================= 00:19:47.197 Applications Enabled 00:19:47.197 ================= 00:19:47.197 00:19:47.197 apps: 00:19:47.197 00:19:47.197 00:19:47.197 Message: 00:19:47.197 ================= 00:19:47.197 Libraries Enabled 00:19:47.197 ================= 00:19:47.197 00:19:47.197 libs: 00:19:47.197 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:19:47.197 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:19:47.197 cryptodev, dmadev, reorder, security, 00:19:47.197 00:19:47.197 Message: 00:19:47.197 =============== 00:19:47.197 Drivers Enabled 00:19:47.197 =============== 00:19:47.197 00:19:47.197 common: 00:19:47.197 00:19:47.197 bus: 00:19:47.197 pci, vdev, 00:19:47.197 mempool: 00:19:47.197 ring, 00:19:47.197 dma: 00:19:47.197 00:19:47.197 net: 00:19:47.197 00:19:47.197 crypto: 00:19:47.197 00:19:47.197 compress: 00:19:47.197 00:19:47.197 00:19:47.197 Message: 00:19:47.197 ================= 00:19:47.197 Content Skipped 00:19:47.197 ================= 00:19:47.197 00:19:47.197 apps: 00:19:47.197 dumpcap: explicitly disabled via build config 00:19:47.197 graph: explicitly disabled via build config 00:19:47.197 pdump: explicitly disabled via build config 00:19:47.197 proc-info: explicitly disabled via build config 00:19:47.197 test-acl: explicitly disabled via build config 00:19:47.197 test-bbdev: explicitly disabled via build config 00:19:47.197 test-cmdline: explicitly disabled via build config 00:19:47.197 test-compress-perf: explicitly disabled via build config 00:19:47.197 test-crypto-perf: explicitly disabled via build config 00:19:47.197 test-dma-perf: explicitly disabled via build config 00:19:47.197 test-eventdev: explicitly disabled via build config 00:19:47.197 test-fib: explicitly disabled via build config 00:19:47.197 test-flow-perf: explicitly disabled via build config 00:19:47.197 test-gpudev: 
explicitly disabled via build config 00:19:47.197 test-mldev: explicitly disabled via build config 00:19:47.197 test-pipeline: explicitly disabled via build config 00:19:47.197 test-pmd: explicitly disabled via build config 00:19:47.197 test-regex: explicitly disabled via build config 00:19:47.197 test-sad: explicitly disabled via build config 00:19:47.197 test-security-perf: explicitly disabled via build config 00:19:47.197 00:19:47.197 libs: 00:19:47.197 argparse: explicitly disabled via build config 00:19:47.197 metrics: explicitly disabled via build config 00:19:47.197 acl: explicitly disabled via build config 00:19:47.197 bbdev: explicitly disabled via build config 00:19:47.197 bitratestats: explicitly disabled via build config 00:19:47.197 bpf: explicitly disabled via build config 00:19:47.197 cfgfile: explicitly disabled via build config 00:19:47.197 distributor: explicitly disabled via build config 00:19:47.197 efd: explicitly disabled via build config 00:19:47.197 eventdev: explicitly disabled via build config 00:19:47.197 dispatcher: explicitly disabled via build config 00:19:47.197 gpudev: explicitly disabled via build config 00:19:47.197 gro: explicitly disabled via build config 00:19:47.197 gso: explicitly disabled via build config 00:19:47.197 ip_frag: explicitly disabled via build config 00:19:47.197 jobstats: explicitly disabled via build config 00:19:47.197 latencystats: explicitly disabled via build config 00:19:47.197 lpm: explicitly disabled via build config 00:19:47.197 member: explicitly disabled via build config 00:19:47.197 pcapng: explicitly disabled via build config 00:19:47.197 power: only supported on Linux 00:19:47.197 rawdev: explicitly disabled via build config 00:19:47.197 regexdev: explicitly disabled via build config 00:19:47.197 mldev: explicitly disabled via build config 00:19:47.197 rib: explicitly disabled via build config 00:19:47.197 sched: explicitly disabled via build config 00:19:47.197 stack: explicitly disabled via build config 00:19:47.197 vhost: only supported on Linux 00:19:47.197 ipsec: explicitly disabled via build config 00:19:47.197 pdcp: explicitly disabled via build config 00:19:47.197 fib: explicitly disabled via build config 00:19:47.197 port: explicitly disabled via build config 00:19:47.197 pdump: explicitly disabled via build config 00:19:47.197 table: explicitly disabled via build config 00:19:47.197 pipeline: explicitly disabled via build config 00:19:47.197 graph: explicitly disabled via build config 00:19:47.197 node: explicitly disabled via build config 00:19:47.197 00:19:47.197 drivers: 00:19:47.197 common/cpt: not in enabled drivers build config 00:19:47.197 common/dpaax: not in enabled drivers build config 00:19:47.197 common/iavf: not in enabled drivers build config 00:19:47.197 common/idpf: not in enabled drivers build config 00:19:47.197 common/ionic: not in enabled drivers build config 00:19:47.197 common/mvep: not in enabled drivers build config 00:19:47.197 common/octeontx: not in enabled drivers build config 00:19:47.197 bus/auxiliary: not in enabled drivers build config 00:19:47.197 bus/cdx: not in enabled drivers build config 00:19:47.197 bus/dpaa: not in enabled drivers build config 00:19:47.197 bus/fslmc: not in enabled drivers build config 00:19:47.197 bus/ifpga: not in enabled drivers build config 00:19:47.197 bus/platform: not in enabled drivers build config 00:19:47.197 bus/uacce: not in enabled drivers build config 00:19:47.197 bus/vmbus: not in enabled drivers build config 00:19:47.197 common/cnxk: not in 
enabled drivers build config 00:19:47.197 common/mlx5: not in enabled drivers build config 00:19:47.197 common/nfp: not in enabled drivers build config 00:19:47.197 common/nitrox: not in enabled drivers build config 00:19:47.197 common/qat: not in enabled drivers build config 00:19:47.197 common/sfc_efx: not in enabled drivers build config 00:19:47.197 mempool/bucket: not in enabled drivers build config 00:19:47.197 mempool/cnxk: not in enabled drivers build config 00:19:47.197 mempool/dpaa: not in enabled drivers build config 00:19:47.197 mempool/dpaa2: not in enabled drivers build config 00:19:47.197 mempool/octeontx: not in enabled drivers build config 00:19:47.197 mempool/stack: not in enabled drivers build config 00:19:47.197 dma/cnxk: not in enabled drivers build config 00:19:47.197 dma/dpaa: not in enabled drivers build config 00:19:47.197 dma/dpaa2: not in enabled drivers build config 00:19:47.197 dma/hisilicon: not in enabled drivers build config 00:19:47.197 dma/idxd: not in enabled drivers build config 00:19:47.197 dma/ioat: not in enabled drivers build config 00:19:47.197 dma/skeleton: not in enabled drivers build config 00:19:47.197 net/af_packet: not in enabled drivers build config 00:19:47.197 net/af_xdp: not in enabled drivers build config 00:19:47.197 net/ark: not in enabled drivers build config 00:19:47.197 net/atlantic: not in enabled drivers build config 00:19:47.197 net/avp: not in enabled drivers build config 00:19:47.197 net/axgbe: not in enabled drivers build config 00:19:47.197 net/bnx2x: not in enabled drivers build config 00:19:47.197 net/bnxt: not in enabled drivers build config 00:19:47.198 net/bonding: not in enabled drivers build config 00:19:47.198 net/cnxk: not in enabled drivers build config 00:19:47.198 net/cpfl: not in enabled drivers build config 00:19:47.198 net/cxgbe: not in enabled drivers build config 00:19:47.198 net/dpaa: not in enabled drivers build config 00:19:47.198 net/dpaa2: not in enabled drivers build config 00:19:47.198 net/e1000: not in enabled drivers build config 00:19:47.198 net/ena: not in enabled drivers build config 00:19:47.198 net/enetc: not in enabled drivers build config 00:19:47.198 net/enetfec: not in enabled drivers build config 00:19:47.198 net/enic: not in enabled drivers build config 00:19:47.198 net/failsafe: not in enabled drivers build config 00:19:47.198 net/fm10k: not in enabled drivers build config 00:19:47.198 net/gve: not in enabled drivers build config 00:19:47.198 net/hinic: not in enabled drivers build config 00:19:47.198 net/hns3: not in enabled drivers build config 00:19:47.198 net/i40e: not in enabled drivers build config 00:19:47.198 net/iavf: not in enabled drivers build config 00:19:47.198 net/ice: not in enabled drivers build config 00:19:47.198 net/idpf: not in enabled drivers build config 00:19:47.198 net/igc: not in enabled drivers build config 00:19:47.198 net/ionic: not in enabled drivers build config 00:19:47.198 net/ipn3ke: not in enabled drivers build config 00:19:47.198 net/ixgbe: not in enabled drivers build config 00:19:47.198 net/mana: not in enabled drivers build config 00:19:47.198 net/memif: not in enabled drivers build config 00:19:47.198 net/mlx4: not in enabled drivers build config 00:19:47.198 net/mlx5: not in enabled drivers build config 00:19:47.198 net/mvneta: not in enabled drivers build config 00:19:47.198 net/mvpp2: not in enabled drivers build config 00:19:47.198 net/netvsc: not in enabled drivers build config 00:19:47.198 net/nfb: not in enabled drivers build config 
00:19:47.198 net/nfp: not in enabled drivers build config 00:19:47.198 net/ngbe: not in enabled drivers build config 00:19:47.198 net/null: not in enabled drivers build config 00:19:47.198 net/octeontx: not in enabled drivers build config 00:19:47.198 net/octeon_ep: not in enabled drivers build config 00:19:47.198 net/pcap: not in enabled drivers build config 00:19:47.198 net/pfe: not in enabled drivers build config 00:19:47.198 net/qede: not in enabled drivers build config 00:19:47.198 net/ring: not in enabled drivers build config 00:19:47.198 net/sfc: not in enabled drivers build config 00:19:47.198 net/softnic: not in enabled drivers build config 00:19:47.198 net/tap: not in enabled drivers build config 00:19:47.198 net/thunderx: not in enabled drivers build config 00:19:47.198 net/txgbe: not in enabled drivers build config 00:19:47.198 net/vdev_netvsc: not in enabled drivers build config 00:19:47.198 net/vhost: not in enabled drivers build config 00:19:47.198 net/virtio: not in enabled drivers build config 00:19:47.198 net/vmxnet3: not in enabled drivers build config 00:19:47.198 raw/*: missing internal dependency, "rawdev" 00:19:47.198 crypto/armv8: not in enabled drivers build config 00:19:47.198 crypto/bcmfs: not in enabled drivers build config 00:19:47.198 crypto/caam_jr: not in enabled drivers build config 00:19:47.198 crypto/ccp: not in enabled drivers build config 00:19:47.198 crypto/cnxk: not in enabled drivers build config 00:19:47.198 crypto/dpaa_sec: not in enabled drivers build config 00:19:47.198 crypto/dpaa2_sec: not in enabled drivers build config 00:19:47.198 crypto/ipsec_mb: not in enabled drivers build config 00:19:47.198 crypto/mlx5: not in enabled drivers build config 00:19:47.198 crypto/mvsam: not in enabled drivers build config 00:19:47.198 crypto/nitrox: not in enabled drivers build config 00:19:47.198 crypto/null: not in enabled drivers build config 00:19:47.198 crypto/octeontx: not in enabled drivers build config 00:19:47.198 crypto/openssl: not in enabled drivers build config 00:19:47.198 crypto/scheduler: not in enabled drivers build config 00:19:47.198 crypto/uadk: not in enabled drivers build config 00:19:47.198 crypto/virtio: not in enabled drivers build config 00:19:47.198 compress/isal: not in enabled drivers build config 00:19:47.198 compress/mlx5: not in enabled drivers build config 00:19:47.198 compress/nitrox: not in enabled drivers build config 00:19:47.198 compress/octeontx: not in enabled drivers build config 00:19:47.198 compress/zlib: not in enabled drivers build config 00:19:47.198 regex/*: missing internal dependency, "regexdev" 00:19:47.198 ml/*: missing internal dependency, "mldev" 00:19:47.198 vdpa/*: missing internal dependency, "vhost" 00:19:47.198 event/*: missing internal dependency, "eventdev" 00:19:47.198 baseband/*: missing internal dependency, "bbdev" 00:19:47.198 gpu/*: missing internal dependency, "gpudev" 00:19:47.198 00:19:47.198 00:19:47.198 Build targets in project: 81 00:19:47.198 00:19:47.198 DPDK 24.03.0 00:19:47.198 00:19:47.198 User defined options 00:19:47.198 default_library : static 00:19:47.198 libdir : lib 00:19:47.198 prefix : / 00:19:47.198 c_args : -fPIC -Werror 00:19:47.198 c_link_args : 00:19:47.198 cpu_instruction_set: native 00:19:47.198 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test 00:19:47.198 
disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table 00:19:47.198 enable_docs : false 00:19:47.198 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:19:47.198 enable_kmods : true 00:19:47.198 max_lcores : 128 00:19:47.198 tests : false 00:19:47.198 00:19:47.198 Found ninja-1.11.1 at /usr/local/bin/ninja 00:19:47.198 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp' 00:19:47.456 [1/233] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:19:47.456 [2/233] Compiling C object lib/librte_log.a.p/log_log_freebsd.c.o 00:19:47.456 [3/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:19:47.456 [4/233] Compiling C object lib/librte_log.a.p/log_log.c.o 00:19:47.456 [5/233] Linking static target lib/librte_kvargs.a 00:19:47.456 [6/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:19:47.456 [7/233] Linking static target lib/librte_log.a 00:19:47.716 [8/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:19:47.716 [9/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:19:47.716 [10/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:19:47.974 [11/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:19:47.974 [12/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:19:47.974 [13/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:19:47.974 [14/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:19:47.974 [15/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:19:47.974 [16/233] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:19:47.974 [17/233] Linking static target lib/librte_telemetry.a 00:19:48.232 [18/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:19:48.232 [19/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:19:48.232 [20/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:19:48.232 [21/233] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:19:48.232 [22/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:19:48.232 [23/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:19:48.491 [24/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:19:48.491 [25/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:19:48.491 [26/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:19:48.491 [27/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:19:48.750 [28/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:19:48.750 [29/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:19:48.750 [30/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:19:48.750 [31/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:19:48.750 [32/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:19:48.750 [33/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 
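The DPDK sub-build above is configured by meson; restating the "User defined options" summary as an explicit command line gives a sketch like the following (a reconstruction assuming the standard DPDK meson option names match the summary; the disable lists are copied verbatim from it):

  meson setup build-tmp \
    -Ddefault_library=static -Dlibdir=lib -Dprefix=/ \
    -Dc_args='-fPIC -Werror' -Dcpu_instruction_set=native \
    -Ddisable_apps=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test \
    -Ddisable_libs=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table \
    -Denable_docs=false -Denable_drivers=bus,bus/pci,bus/vdev,mempool/ring \
    -Denable_kmods=true -Dmax_lcores=128 -Dtests=false
  ninja -C build-tmp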
00:19:48.750 [34/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:19:49.009 [35/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:19:49.009 [36/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:19:49.009 [37/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:19:49.009 [38/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:19:49.009 [39/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:19:49.278 [40/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:19:49.278 [41/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:19:49.278 [42/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:19:49.278 [43/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:19:49.278 [44/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:19:49.552 [45/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:19:49.552 [46/233] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:19:49.552 [47/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:19:49.552 [48/233] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:19:49.552 [49/233] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:19:49.552 [50/233] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:19:49.552 [51/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_cpuflags.c.o 00:19:49.810 [52/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:19:49.810 [53/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:19:49.810 [54/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:19:49.810 [55/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:19:49.810 [56/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:19:49.810 [57/233] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:19:50.069 [58/233] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:19:50.069 [59/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_dev.c.o 00:19:50.069 [60/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_alarm.c.o 00:19:50.069 [61/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:19:50.069 [62/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:19:50.069 [63/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal.c.o 00:19:50.069 [64/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:19:50.069 [65/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_hugepage_info.c.o 00:19:50.327 [66/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_lcore.c.o 00:19:50.327 [67/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memalloc.c.o 00:19:50.327 [68/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_interrupts.c.o 00:19:50.327 [69/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_thread.c.o 00:19:50.327 [70/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_timer.c.o 00:19:50.327 [71/233] Compiling C object lib/librte_eal.a.p/eal_freebsd_eal_memory.c.o 00:19:50.586 [72/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:19:50.586 [73/233] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:19:50.586 [74/233] 
Linking static target lib/librte_eal.a 00:19:50.586 [75/233] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:19:50.586 [76/233] Linking static target lib/librte_ring.a 00:19:50.586 [77/233] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:19:50.844 [78/233] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:19:50.844 [79/233] Linking target lib/librte_log.so.24.1 00:19:50.844 [80/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:19:50.844 [81/233] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:19:50.844 [82/233] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:19:50.844 [83/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:19:50.844 [84/233] Linking target lib/librte_kvargs.so.24.1 00:19:50.844 [85/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:19:51.102 [86/233] Linking target lib/librte_telemetry.so.24.1 00:19:51.102 [87/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:19:51.102 [88/233] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:19:51.102 [89/233] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:19:51.102 [90/233] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:19:51.102 [91/233] Linking static target lib/librte_mempool.a 00:19:51.361 [92/233] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:19:51.361 [93/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:19:51.361 [94/233] Linking static target lib/librte_rcu.a 00:19:51.361 [95/233] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:19:51.361 [96/233] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:19:51.361 [97/233] Linking static target lib/librte_mbuf.a 00:19:51.361 [98/233] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:19:51.361 [99/233] Linking static target lib/net/libnet_crc_avx512_lib.a 00:19:51.619 [100/233] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:19:51.619 [101/233] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:19:51.619 [102/233] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:19:51.619 [103/233] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:19:51.619 [104/233] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:19:51.619 [105/233] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:19:51.619 [106/233] Linking static target lib/librte_net.a 00:19:51.876 [107/233] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:19:51.876 [108/233] Linking static target lib/librte_meter.a 00:19:51.876 [109/233] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:19:52.132 [110/233] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:19:52.132 [111/233] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:19:52.132 [112/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:19:52.132 [113/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:19:52.132 [114/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:19:52.132 [115/233] Compiling C object 
lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:19:52.389 [116/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:19:52.647 [117/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:19:52.647 [118/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:19:52.647 [119/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:19:52.904 [120/233] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:19:52.904 [121/233] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:19:52.904 [122/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:19:52.904 [123/233] Linking static target lib/librte_pci.a 00:19:52.904 [124/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:19:52.904 [125/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:19:52.904 [126/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:19:52.904 [127/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:19:52.904 [128/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:19:52.904 [129/233] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:19:53.162 [130/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:19:53.162 [131/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:19:53.162 [132/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:19:53.162 [133/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:19:53.162 [134/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:19:53.162 [135/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:19:53.162 [136/233] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:19:53.162 [137/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:19:53.162 [138/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:19:53.421 [139/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:19:53.421 [140/233] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:19:53.421 [141/233] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:19:53.421 [142/233] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:19:53.421 [143/233] Linking static target lib/librte_cmdline.a 00:19:53.679 [144/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:19:53.679 [145/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:19:53.679 [146/233] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:19:53.937 [147/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:19:53.937 [148/233] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:19:53.937 [149/233] Linking static target lib/librte_timer.a 00:19:53.937 [150/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:19:53.937 [151/233] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:19:53.937 [152/233] Linking static target lib/librte_compressdev.a 00:19:54.196 [153/233] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:19:54.196 [154/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:19:54.196 [155/233] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:19:54.196 [156/233] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:19:54.455 [157/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:19:54.455 [158/233] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:19:54.455 [159/233] Linking static target lib/librte_hash.a 00:19:54.455 [160/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:19:54.455 [161/233] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:19:54.713 [162/233] Linking static target lib/librte_ethdev.a 00:19:54.713 [163/233] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:19:54.713 [164/233] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:19:54.713 [165/233] Linking static target lib/librte_dmadev.a 00:19:54.713 [166/233] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:19:54.713 [167/233] Linking static target lib/librte_reorder.a 00:19:54.713 [168/233] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:19:54.713 [169/233] Linking static target lib/librte_security.a 00:19:54.971 [170/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:19:54.972 [171/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:19:54.972 [172/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:19:54.972 [173/233] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:19:54.972 [174/233] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:19:54.972 [175/233] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_bsd_pci.c.o 00:19:54.972 [176/233] Linking static target drivers/libtmp_rte_bus_pci.a 00:19:54.972 [177/233] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:19:55.230 [178/233] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:19:55.230 [179/233] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:19:55.230 [180/233] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:19:55.230 [181/233] Generating kernel/freebsd/contigmem with a custom command 00:19:55.230 machine -> /usr/src/sys/amd64/include 00:19:55.230 x86 -> /usr/src/sys/x86/include 00:19:55.230 i386 -> /usr/src/sys/i386/include 00:19:55.230 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/device_if.m -h 00:19:55.230 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/kern/bus_if.m -h 00:19:55.230 awk -f /usr/src/sys/tools/makeobjops.awk /usr/src/sys/dev/pci/pci_if.m -h 00:19:55.230 touch opt_global.h 00:19:55.230 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.contigmem.o -MTcontigmem.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/contigmem/contigmem.c -o contigmem.o 00:19:55.230 ld.lld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o contigmem.ko contigmem.o 00:19:55.230 :> export_syms 00:19:55.230 awk -f /usr/src/sys/conf/kmod_syms.awk contigmem.ko export_syms | xargs -J% objcopy % contigmem.ko 00:19:55.230 objcopy --strip-debug contigmem.ko 00:19:55.230 [182/233] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:19:55.230 [183/233] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:19:55.230 [184/233] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:19:55.230 [185/233] Linking static target drivers/librte_bus_pci.a 00:19:55.230 [186/233] Linking static target drivers/libtmp_rte_bus_vdev.a 00:19:55.489 [187/233] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:19:55.489 [188/233] Linking static target lib/librte_cryptodev.a 00:19:55.489 [189/233] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:19:55.489 [190/233] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:19:55.489 [191/233] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:19:55.489 [192/233] Linking static target drivers/librte_bus_vdev.a 00:19:55.489 [193/233] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:19:55.489 [194/233] Generating kernel/freebsd/nic_uio with a custom command 00:19:55.489 clang -O2 -pipe -include rte_config.h -fno-strict-aliasing -Werror -D_KERNEL -DKLD_MODULE -nostdinc -I/home/vagrant/spdk_repo/spdk/dpdk/build-tmp -I/home/vagrant/spdk_repo/spdk/dpdk/config -include /home/vagrant/spdk_repo/spdk/dpdk/build-tmp/kernel/freebsd/opt_global.h -I. 
-I/usr/src/sys -I/usr/src/sys/contrib/ck/include -fno-common -fno-omit-frame-pointer -mno-omit-leaf-frame-pointer -fdebug-prefix-map=./machine=/usr/src/sys/amd64/include -fdebug-prefix-map=./x86=/usr/src/sys/x86/include -fdebug-prefix-map=./i386=/usr/src/sys/i386/include -MD -MF.depend.nic_uio.o -MTnic_uio.o -mcmodel=kernel -mno-red-zone -mno-mmx -mno-sse -msoft-float -fno-asynchronous-unwind-tables -ffreestanding -fwrapv -fstack-protector -Wall -Wstrict-prototypes -Wmissing-prototypes -Wpointer-arith -Wcast-qual -Wundef -Wno-pointer-sign -D__printf__=__freebsd_kprintf__ -Wmissing-include-dirs -fdiagnostics-show-option -Wno-unknown-pragmas -Wno-error=tautological-compare -Wno-error=empty-body -Wno-error=parentheses-equality -Wno-error=unused-function -Wno-error=pointer-sign -Wno-error=shift-negative-value -Wno-address-of-packed-member -Wno-format-zero-length -mno-aes -mno-avx -std=gnu99 -c /home/vagrant/spdk_repo/spdk/dpdk/kernel/freebsd/nic_uio/nic_uio.c -o nic_uio.o 00:19:55.489 ld.lld -m elf_x86_64_fbsd -warn-common --build-id=sha1 -T /usr/src/sys/conf/ldscript.kmod.amd64 -r -o nic_uio.ko nic_uio.o 00:19:55.489 :> export_syms 00:19:55.489 awk -f /usr/src/sys/conf/kmod_syms.awk nic_uio.ko export_syms | xargs -J% objcopy % nic_uio.ko 00:19:55.489 objcopy --strip-debug nic_uio.ko 00:19:55.748 [195/233] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:19:56.007 [196/233] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:19:56.007 [197/233] Linking static target drivers/libtmp_rte_mempool_ring.a 00:19:56.265 [198/233] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:19:56.265 [199/233] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:19:56.265 [200/233] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:19:56.265 [201/233] Linking static target drivers/librte_mempool_ring.a 00:19:56.265 [202/233] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:20:00.447 [203/233] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:20:01.384 [204/233] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:20:01.384 [205/233] Linking target lib/librte_eal.so.24.1 00:20:01.641 [206/233] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:20:01.641 [207/233] Linking target lib/librte_pci.so.24.1 00:20:01.641 [208/233] Linking target lib/librte_meter.so.24.1 00:20:01.641 [209/233] Linking target lib/librte_dmadev.so.24.1 00:20:01.641 [210/233] Linking target drivers/librte_bus_vdev.so.24.1 00:20:01.641 [211/233] Linking target lib/librte_ring.so.24.1 00:20:01.641 [212/233] Linking target lib/librte_timer.so.24.1 00:20:01.641 [213/233] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:20:01.641 [214/233] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:20:01.897 [215/233] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:20:01.897 [216/233] Linking target lib/librte_rcu.so.24.1 00:20:01.897 [217/233] Linking target drivers/librte_bus_pci.so.24.1 00:20:01.897 [218/233] Linking target lib/librte_mempool.so.24.1 00:20:01.897 [219/233] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:20:01.897 [220/233] Generating symbol file 
lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:20:01.897 [221/233] Linking target drivers/librte_mempool_ring.so.24.1 00:20:01.897 [222/233] Linking target lib/librte_mbuf.so.24.1 00:20:02.155 [223/233] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:20:02.155 [224/233] Linking target lib/librte_net.so.24.1 00:20:02.155 [225/233] Linking target lib/librte_compressdev.so.24.1 00:20:02.155 [226/233] Linking target lib/librte_reorder.so.24.1 00:20:02.155 [227/233] Linking target lib/librte_cryptodev.so.24.1 00:20:02.155 [228/233] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:20:02.155 [229/233] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:20:02.413 [230/233] Linking target lib/librte_cmdline.so.24.1 00:20:02.413 [231/233] Linking target lib/librte_hash.so.24.1 00:20:02.413 [232/233] Linking target lib/librte_security.so.24.1 00:20:02.413 [233/233] Linking target lib/librte_ethdev.so.24.1 00:20:02.413 INFO: autodetecting backend as ninja 00:20:02.413 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp 00:20:03.344 CC lib/ut/ut.o 00:20:03.344 CC lib/log/log.o 00:20:03.344 CC lib/log/log_flags.o 00:20:03.344 CC lib/ut_mock/mock.o 00:20:03.344 CC lib/log/log_deprecated.o 00:20:03.344 LIB libspdk_ut_mock.a 00:20:03.344 LIB libspdk_log.a 00:20:03.344 LIB libspdk_ut.a 00:20:03.344 CC lib/dma/dma.o 00:20:03.344 CC lib/ioat/ioat.o 00:20:03.344 CC lib/util/base64.o 00:20:03.344 CC lib/util/bit_array.o 00:20:03.344 CC lib/util/cpuset.o 00:20:03.344 CC lib/util/crc16.o 00:20:03.344 CXX lib/trace_parser/trace.o 00:20:03.344 CC lib/util/crc32.o 00:20:03.344 CC lib/util/crc32c.o 00:20:03.344 CC lib/util/crc32_ieee.o 00:20:03.602 CC lib/util/crc64.o 00:20:03.602 CC lib/util/dif.o 00:20:03.602 CC lib/util/fd.o 00:20:03.602 CC lib/util/file.o 00:20:03.602 CC lib/util/hexlify.o 00:20:03.602 LIB libspdk_dma.a 00:20:03.602 CC lib/util/iov.o 00:20:03.602 CC lib/util/math.o 00:20:03.602 CC lib/util/pipe.o 00:20:03.602 CC lib/util/strerror_tls.o 00:20:03.602 CC lib/util/string.o 00:20:03.602 CC lib/util/uuid.o 00:20:03.602 CC lib/util/fd_group.o 00:20:03.602 LIB libspdk_ioat.a 00:20:03.602 CC lib/util/xor.o 00:20:03.602 CC lib/util/zipf.o 00:20:04.168 LIB libspdk_util.a 00:20:04.168 CC lib/idxd/idxd.o 00:20:04.168 CC lib/idxd/idxd_user.o 00:20:04.168 CC lib/rdma_utils/rdma_utils.o 00:20:04.168 CC lib/conf/conf.o 00:20:04.168 CC lib/vmd/vmd.o 00:20:04.168 CC lib/vmd/led.o 00:20:04.168 CC lib/env_dpdk/env.o 00:20:04.168 CC lib/rdma_provider/common.o 00:20:04.168 CC lib/json/json_parse.o 00:20:04.168 CC lib/env_dpdk/memory.o 00:20:04.168 CC lib/rdma_provider/rdma_provider_verbs.o 00:20:04.426 LIB libspdk_conf.a 00:20:04.426 CC lib/json/json_util.o 00:20:04.426 LIB libspdk_rdma_utils.a 00:20:04.426 CC lib/json/json_write.o 00:20:04.426 CC lib/env_dpdk/pci.o 00:20:04.426 LIB libspdk_rdma_provider.a 00:20:04.426 CC lib/env_dpdk/init.o 00:20:04.426 CC lib/env_dpdk/threads.o 00:20:04.426 CC lib/env_dpdk/pci_ioat.o 00:20:04.426 LIB libspdk_vmd.a 00:20:04.426 CC lib/env_dpdk/pci_virtio.o 00:20:04.426 CC lib/env_dpdk/pci_vmd.o 00:20:04.426 CC lib/env_dpdk/pci_idxd.o 00:20:04.426 LIB libspdk_idxd.a 00:20:04.426 LIB libspdk_trace_parser.a 00:20:04.426 CC lib/env_dpdk/pci_event.o 00:20:04.684 CC lib/env_dpdk/sigbus_handler.o 00:20:04.684 CC lib/env_dpdk/pci_dpdk.o 00:20:04.684 CC lib/env_dpdk/pci_dpdk_2207.o 00:20:04.684 CC 
lib/env_dpdk/pci_dpdk_2211.o 00:20:04.684 LIB libspdk_json.a 00:20:04.684 CC lib/jsonrpc/jsonrpc_server.o 00:20:04.684 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:20:04.684 CC lib/jsonrpc/jsonrpc_client.o 00:20:04.684 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:20:04.942 LIB libspdk_jsonrpc.a 00:20:04.942 CC lib/rpc/rpc.o 00:20:05.200 LIB libspdk_rpc.a 00:20:05.200 CC lib/keyring/keyring.o 00:20:05.200 CC lib/keyring/keyring_rpc.o 00:20:05.200 CC lib/trace/trace.o 00:20:05.200 CC lib/trace/trace_flags.o 00:20:05.200 CC lib/trace/trace_rpc.o 00:20:05.200 CC lib/notify/notify.o 00:20:05.200 CC lib/notify/notify_rpc.o 00:20:05.200 LIB libspdk_env_dpdk.a 00:20:05.200 LIB libspdk_notify.a 00:20:05.458 LIB libspdk_keyring.a 00:20:05.458 LIB libspdk_trace.a 00:20:05.458 CC lib/thread/thread.o 00:20:05.458 CC lib/thread/iobuf.o 00:20:05.458 CC lib/sock/sock.o 00:20:05.458 CC lib/sock/sock_rpc.o 00:20:05.716 LIB libspdk_sock.a 00:20:06.037 CC lib/nvme/nvme_ctrlr_cmd.o 00:20:06.037 CC lib/nvme/nvme_fabric.o 00:20:06.037 CC lib/nvme/nvme_ctrlr.o 00:20:06.037 CC lib/nvme/nvme_ns_cmd.o 00:20:06.037 CC lib/nvme/nvme_ns.o 00:20:06.037 CC lib/nvme/nvme_pcie_common.o 00:20:06.037 CC lib/nvme/nvme_pcie.o 00:20:06.037 CC lib/nvme/nvme_qpair.o 00:20:06.037 CC lib/nvme/nvme.o 00:20:06.037 LIB libspdk_thread.a 00:20:06.037 CC lib/nvme/nvme_quirks.o 00:20:06.601 CC lib/accel/accel.o 00:20:06.601 CC lib/blob/blobstore.o 00:20:06.601 CC lib/blob/request.o 00:20:06.601 CC lib/init/json_config.o 00:20:06.601 CC lib/blob/zeroes.o 00:20:06.601 CC lib/init/subsystem.o 00:20:06.601 CC lib/init/subsystem_rpc.o 00:20:06.601 CC lib/accel/accel_rpc.o 00:20:06.601 CC lib/blob/blob_bs_dev.o 00:20:06.601 CC lib/init/rpc.o 00:20:06.601 CC lib/accel/accel_sw.o 00:20:06.601 CC lib/nvme/nvme_transport.o 00:20:06.601 CC lib/nvme/nvme_discovery.o 00:20:06.601 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:20:06.859 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:20:06.859 LIB libspdk_init.a 00:20:06.859 CC lib/nvme/nvme_tcp.o 00:20:06.859 CC lib/nvme/nvme_opal.o 00:20:06.859 CC lib/event/app.o 00:20:06.859 LIB libspdk_accel.a 00:20:07.117 CC lib/event/reactor.o 00:20:07.117 CC lib/bdev/bdev.o 00:20:07.117 CC lib/bdev/bdev_rpc.o 00:20:07.117 CC lib/event/log_rpc.o 00:20:07.117 CC lib/bdev/bdev_zone.o 00:20:07.117 CC lib/nvme/nvme_io_msg.o 00:20:07.117 CC lib/bdev/part.o 00:20:07.374 CC lib/event/app_rpc.o 00:20:07.374 CC lib/bdev/scsi_nvme.o 00:20:07.374 CC lib/event/scheduler_static.o 00:20:07.374 CC lib/nvme/nvme_poll_group.o 00:20:07.375 CC lib/nvme/nvme_zns.o 00:20:07.375 CC lib/nvme/nvme_stubs.o 00:20:07.375 CC lib/nvme/nvme_auth.o 00:20:07.375 CC lib/nvme/nvme_rdma.o 00:20:07.375 LIB libspdk_event.a 00:20:07.940 LIB libspdk_blob.a 00:20:08.198 CC lib/lvol/lvol.o 00:20:08.198 CC lib/blobfs/blobfs.o 00:20:08.198 CC lib/blobfs/tree.o 00:20:08.198 LIB libspdk_nvme.a 00:20:08.455 LIB libspdk_bdev.a 00:20:08.455 CC lib/scsi/dev.o 00:20:08.455 CC lib/scsi/lun.o 00:20:08.455 CC lib/scsi/scsi.o 00:20:08.455 CC lib/scsi/port.o 00:20:08.455 CC lib/scsi/scsi_bdev.o 00:20:08.455 CC lib/scsi/scsi_pr.o 00:20:08.455 CC lib/scsi/scsi_rpc.o 00:20:08.455 LIB libspdk_blobfs.a 00:20:08.455 CC lib/nvmf/ctrlr.o 00:20:08.455 LIB libspdk_lvol.a 00:20:08.455 CC lib/scsi/task.o 00:20:08.455 CC lib/nvmf/ctrlr_discovery.o 00:20:08.455 CC lib/nvmf/ctrlr_bdev.o 00:20:08.456 CC lib/nvmf/subsystem.o 00:20:08.456 CC lib/nvmf/nvmf.o 00:20:08.713 CC lib/nvmf/nvmf_rpc.o 00:20:08.713 CC lib/nvmf/transport.o 00:20:08.713 CC lib/nvmf/tcp.o 00:20:08.713 CC lib/nvmf/stubs.o 00:20:08.713 CC 
lib/nvmf/mdns_server.o 00:20:08.713 CC lib/nvmf/rdma.o 00:20:08.713 CC lib/nvmf/auth.o 00:20:08.713 LIB libspdk_scsi.a 00:20:08.969 CC lib/iscsi/init_grp.o 00:20:08.969 CC lib/iscsi/conn.o 00:20:08.969 CC lib/iscsi/iscsi.o 00:20:08.969 CC lib/iscsi/md5.o 00:20:08.969 CC lib/iscsi/param.o 00:20:08.969 CC lib/iscsi/portal_grp.o 00:20:08.969 CC lib/iscsi/tgt_node.o 00:20:09.227 CC lib/iscsi/iscsi_subsystem.o 00:20:09.227 CC lib/iscsi/iscsi_rpc.o 00:20:09.227 CC lib/iscsi/task.o 00:20:09.794 LIB libspdk_nvmf.a 00:20:09.794 LIB libspdk_iscsi.a 00:20:10.053 CC module/env_dpdk/env_dpdk_rpc.o 00:20:10.053 CC module/accel/error/accel_error.o 00:20:10.053 CC module/accel/error/accel_error_rpc.o 00:20:10.053 CC module/accel/ioat/accel_ioat.o 00:20:10.053 CC module/accel/iaa/accel_iaa.o 00:20:10.053 CC module/sock/posix/posix.o 00:20:10.053 CC module/accel/dsa/accel_dsa.o 00:20:10.053 CC module/scheduler/dynamic/scheduler_dynamic.o 00:20:10.053 CC module/keyring/file/keyring.o 00:20:10.053 CC module/blob/bdev/blob_bdev.o 00:20:10.053 LIB libspdk_env_dpdk_rpc.a 00:20:10.053 CC module/accel/iaa/accel_iaa_rpc.o 00:20:10.053 CC module/accel/ioat/accel_ioat_rpc.o 00:20:10.053 CC module/keyring/file/keyring_rpc.o 00:20:10.053 LIB libspdk_accel_error.a 00:20:10.053 CC module/accel/dsa/accel_dsa_rpc.o 00:20:10.312 LIB libspdk_scheduler_dynamic.a 00:20:10.312 LIB libspdk_accel_iaa.a 00:20:10.312 LIB libspdk_accel_ioat.a 00:20:10.312 LIB libspdk_keyring_file.a 00:20:10.312 LIB libspdk_accel_dsa.a 00:20:10.312 LIB libspdk_blob_bdev.a 00:20:10.312 CC module/blobfs/bdev/blobfs_bdev.o 00:20:10.312 CC module/bdev/error/vbdev_error.o 00:20:10.312 CC module/bdev/delay/vbdev_delay.o 00:20:10.312 CC module/bdev/gpt/gpt.o 00:20:10.312 CC module/bdev/lvol/vbdev_lvol.o 00:20:10.312 CC module/bdev/malloc/bdev_malloc.o 00:20:10.312 CC module/bdev/nvme/bdev_nvme.o 00:20:10.312 CC module/bdev/passthru/vbdev_passthru.o 00:20:10.312 CC module/bdev/null/bdev_null.o 00:20:10.571 LIB libspdk_sock_posix.a 00:20:10.571 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:20:10.571 CC module/bdev/gpt/vbdev_gpt.o 00:20:10.571 CC module/bdev/nvme/bdev_nvme_rpc.o 00:20:10.571 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:20:10.571 CC module/bdev/error/vbdev_error_rpc.o 00:20:10.571 CC module/bdev/null/bdev_null_rpc.o 00:20:10.571 CC module/bdev/malloc/bdev_malloc_rpc.o 00:20:10.571 CC module/bdev/delay/vbdev_delay_rpc.o 00:20:10.571 LIB libspdk_blobfs_bdev.a 00:20:10.571 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:20:10.571 LIB libspdk_bdev_passthru.a 00:20:10.571 LIB libspdk_bdev_error.a 00:20:10.571 CC module/bdev/nvme/nvme_rpc.o 00:20:10.571 LIB libspdk_bdev_gpt.a 00:20:10.571 CC module/bdev/nvme/bdev_mdns_client.o 00:20:10.571 LIB libspdk_bdev_null.a 00:20:10.571 LIB libspdk_bdev_malloc.a 00:20:10.829 LIB libspdk_bdev_delay.a 00:20:10.829 CC module/bdev/raid/bdev_raid.o 00:20:10.829 CC module/bdev/raid/bdev_raid_rpc.o 00:20:10.829 CC module/bdev/split/vbdev_split.o 00:20:10.829 CC module/bdev/zone_block/vbdev_zone_block.o 00:20:10.829 CC module/bdev/split/vbdev_split_rpc.o 00:20:10.829 CC module/bdev/aio/bdev_aio.o 00:20:10.829 CC module/bdev/raid/bdev_raid_sb.o 00:20:10.829 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:20:10.829 LIB libspdk_bdev_lvol.a 00:20:10.829 CC module/bdev/raid/raid0.o 00:20:10.829 CC module/bdev/aio/bdev_aio_rpc.o 00:20:10.829 CC module/bdev/raid/raid1.o 00:20:10.829 LIB libspdk_bdev_split.a 00:20:10.829 CC module/bdev/raid/concat.o 00:20:10.829 LIB libspdk_bdev_zone_block.a 00:20:10.829 LIB libspdk_bdev_aio.a 
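By this point the log shows the bdev modules (malloc, null, passthru, gpt, delay, error, lvol, split, zone_block, aio, raid) linked into the build. Once such a build is installed, these modules are normally exercised over SPDK's JSON-RPC interface; an illustrative sketch, assuming a target application is already running and using the in-tree scripts/rpc.py helper (the bdev names are arbitrary):

  # create two 64 MiB malloc bdevs with a 512-byte block size
  scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
  scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
  # inspect the resulting block devices
  scripts/rpc.py bdev_get_bdevs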
00:20:11.087 LIB libspdk_bdev_raid.a 00:20:11.345 LIB libspdk_bdev_nvme.a 00:20:11.604 CC module/event/subsystems/scheduler/scheduler.o 00:20:11.604 CC module/event/subsystems/sock/sock.o 00:20:11.604 CC module/event/subsystems/keyring/keyring.o 00:20:11.604 CC module/event/subsystems/vmd/vmd.o 00:20:11.604 CC module/event/subsystems/vmd/vmd_rpc.o 00:20:11.604 CC module/event/subsystems/iobuf/iobuf.o 00:20:11.604 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:20:11.865 LIB libspdk_event_keyring.a 00:20:11.865 LIB libspdk_event_vmd.a 00:20:11.865 LIB libspdk_event_sock.a 00:20:11.865 LIB libspdk_event_scheduler.a 00:20:11.865 LIB libspdk_event_iobuf.a 00:20:11.865 CC module/event/subsystems/accel/accel.o 00:20:11.865 LIB libspdk_event_accel.a 00:20:12.125 CC module/event/subsystems/bdev/bdev.o 00:20:12.125 LIB libspdk_event_bdev.a 00:20:12.383 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:20:12.383 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:20:12.383 CC module/event/subsystems/scsi/scsi.o 00:20:12.383 LIB libspdk_event_scsi.a 00:20:12.383 LIB libspdk_event_nvmf.a 00:20:12.641 CC module/event/subsystems/iscsi/iscsi.o 00:20:12.641 LIB libspdk_event_iscsi.a 00:20:12.641 CC app/trace_record/trace_record.o 00:20:12.641 CXX app/trace/trace.o 00:20:12.641 TEST_HEADER include/spdk/config.h 00:20:12.641 CXX test/cpp_headers/accel.o 00:20:12.899 CC test/thread/poller_perf/poller_perf.o 00:20:12.899 CC app/nvmf_tgt/nvmf_main.o 00:20:12.899 CC app/iscsi_tgt/iscsi_tgt.o 00:20:12.899 CC examples/util/zipf/zipf.o 00:20:12.899 CC test/env/mem_callbacks/mem_callbacks.o 00:20:12.899 CC test/app/bdev_svc/bdev_svc.o 00:20:12.899 CC test/dma/test_dma/test_dma.o 00:20:12.899 LINK poller_perf 00:20:12.899 LINK nvmf_tgt 00:20:12.899 LINK zipf 00:20:12.899 LINK bdev_svc 00:20:12.900 CXX test/cpp_headers/accel_module.o 00:20:12.900 LINK iscsi_tgt 00:20:12.900 LINK spdk_trace_record 00:20:13.158 LINK test_dma 00:20:13.158 CXX test/cpp_headers/assert.o 00:20:13.158 CXX test/cpp_headers/barrier.o 00:20:13.416 LINK mem_callbacks 00:20:13.416 CC examples/ioat/perf/perf.o 00:20:13.416 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:20:13.416 CXX test/cpp_headers/base64.o 00:20:13.416 LINK ioat_perf 00:20:13.416 CC test/env/vtophys/vtophys.o 00:20:13.675 CXX test/cpp_headers/bdev.o 00:20:13.675 LINK vtophys 00:20:13.675 LINK nvme_fuzz 00:20:13.675 LINK spdk_trace 00:20:13.675 CXX test/cpp_headers/bdev_module.o 00:20:13.933 CXX test/cpp_headers/bdev_zone.o 00:20:13.933 CC examples/ioat/verify/verify.o 00:20:14.191 CXX test/cpp_headers/bit_array.o 00:20:14.191 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:20:14.191 LINK verify 00:20:14.191 CXX test/cpp_headers/bit_pool.o 00:20:14.450 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:20:14.450 CXX test/cpp_headers/blob.o 00:20:14.450 LINK env_dpdk_post_init 00:20:14.450 CXX test/cpp_headers/blob_bdev.o 00:20:14.708 CXX test/cpp_headers/blobfs.o 00:20:14.708 CC examples/vmd/lsvmd/lsvmd.o 00:20:14.966 CXX test/cpp_headers/blobfs_bdev.o 00:20:14.966 LINK lsvmd 00:20:14.966 CXX test/cpp_headers/conf.o 00:20:15.225 CXX test/cpp_headers/config.o 00:20:15.225 CXX test/cpp_headers/cpuset.o 00:20:15.225 LINK iscsi_fuzz 00:20:15.483 CXX test/cpp_headers/crc16.o 00:20:15.483 CXX test/cpp_headers/crc32.o 00:20:15.742 CC test/thread/lock/spdk_lock.o 00:20:15.742 CXX test/cpp_headers/crc64.o 00:20:15.742 CXX test/cpp_headers/dif.o 00:20:16.000 CXX test/cpp_headers/dma.o 00:20:16.000 CXX test/cpp_headers/endian.o 00:20:16.258 LINK spdk_lock 00:20:16.258 CXX test/cpp_headers/env.o 
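Each CXX test/cpp_headers/<name>.o step in this stretch compiles a small C++ translation unit that includes exactly one public SPDK header, verifying that every installed header is self-contained and usable from C++. A hand-rolled equivalent of one such probe might look like this (the header and include path here are illustrative):

  printf '#include <spdk/accel.h>\nint main(void) { return 0; }\n' > hdr_check.cpp
  # fails if the header is not self-contained or not C++-clean
  c++ -I include -c hdr_check.cpp -o /dev/null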
00:20:16.516 CXX test/cpp_headers/env_dpdk.o
00:20:16.516 CXX test/cpp_headers/event.o
00:20:16.775 CXX test/cpp_headers/fd.o
00:20:16.775 CXX test/cpp_headers/fd_group.o
00:20:17.033 CXX test/cpp_headers/file.o
00:20:17.033 CXX test/cpp_headers/ftl.o
00:20:17.291 CXX test/cpp_headers/gpt_spec.o
00:20:17.548 CXX test/cpp_headers/hexlify.o
00:20:17.548 CXX test/cpp_headers/histogram_data.o
00:20:17.805 CXX test/cpp_headers/idxd.o
00:20:17.805 CXX test/cpp_headers/idxd_spec.o
00:20:18.061 CXX test/cpp_headers/init.o
00:20:18.061 CXX test/cpp_headers/ioat.o
00:20:18.318 CXX test/cpp_headers/ioat_spec.o
00:20:18.575 CXX test/cpp_headers/iscsi_spec.o
00:20:18.575 CXX test/cpp_headers/json.o
00:20:18.833 CC test/env/memory/memory_ut.o
00:20:18.833 CXX test/cpp_headers/jsonrpc.o
00:20:18.833 CC examples/vmd/led/led.o
00:20:18.833 CXX test/cpp_headers/keyring.o
00:20:18.833 CC examples/idxd/perf/perf.o
00:20:18.833 LINK led
00:20:19.089 CXX test/cpp_headers/keyring_module.o
00:20:19.089 LINK idxd_perf
00:20:19.089 CXX test/cpp_headers/likely.o
00:20:19.345 CXX test/cpp_headers/log.o
00:20:19.602 CXX test/cpp_headers/lvol.o
00:20:19.602 LINK memory_ut
00:20:19.602 CXX test/cpp_headers/memory.o
00:20:19.860 CXX test/cpp_headers/mmio.o
00:20:19.860 CXX test/cpp_headers/nbd.o
00:20:19.860 CC app/spdk_tgt/spdk_tgt.o
00:20:19.860 CXX test/cpp_headers/notify.o
00:20:20.117 CC test/env/pci/pci_ut.o
00:20:20.117 LINK spdk_tgt
00:20:20.117 CXX test/cpp_headers/nvme.o
00:20:20.117 LINK pci_ut
00:20:20.117 CXX test/cpp_headers/nvme_intel.o
00:20:20.376 CXX test/cpp_headers/nvme_ocssd.o
00:20:20.376 CC test/app/histogram_perf/histogram_perf.o
00:20:20.376 CXX test/cpp_headers/nvme_ocssd_spec.o
00:20:20.376 CC test/app/jsoncat/jsoncat.o
00:20:20.376 LINK histogram_perf
00:20:20.376 LINK jsoncat
00:20:20.634 CXX test/cpp_headers/nvme_spec.o
00:20:20.634 CC app/spdk_lspci/spdk_lspci.o
00:20:20.634 LINK spdk_lspci
00:20:20.634 CXX test/cpp_headers/nvme_zns.o
00:20:20.892 CXX test/cpp_headers/nvmf.o
00:20:21.151 CXX test/cpp_headers/nvmf_cmd.o
00:20:21.151 CC test/rpc_client/rpc_client_test.o
00:20:21.151 LINK rpc_client_test
00:20:21.151 CC examples/thread/thread/thread_ex.o
00:20:21.151 CXX test/cpp_headers/nvmf_fc_spec.o
00:20:21.410 LINK thread
00:20:21.410 CXX test/cpp_headers/nvmf_spec.o
00:20:21.669 CXX test/cpp_headers/nvmf_transport.o
00:20:21.669 CXX test/cpp_headers/opal.o
00:20:21.926 CC test/app/stub/stub.o
00:20:21.926 CXX test/cpp_headers/opal_spec.o
00:20:21.926 LINK stub
00:20:22.183 CXX test/cpp_headers/pci_ids.o
00:20:22.183 CXX test/cpp_headers/pipe.o
00:20:22.445 CXX test/cpp_headers/queue.o
00:20:22.445 CXX test/cpp_headers/reduce.o
00:20:22.445 CXX test/cpp_headers/rpc.o
00:20:22.719 CXX test/cpp_headers/scheduler.o
00:20:22.986 CXX test/cpp_headers/scsi.o
00:20:22.986 CXX test/cpp_headers/scsi_spec.o
00:20:22.986 CC test/event/event_perf/event_perf.o
00:20:23.243 LINK event_perf
00:20:23.243 CXX test/cpp_headers/sock.o
00:20:23.243 CXX test/cpp_headers/stdinc.o
00:20:23.501 CXX test/cpp_headers/string.o
00:20:23.501 CXX test/cpp_headers/thread.o
00:20:23.759 CXX test/cpp_headers/trace.o
00:20:24.017 CXX test/cpp_headers/trace_parser.o
00:20:24.017 CXX test/cpp_headers/tree.o
00:20:24.017 CXX test/cpp_headers/ublk.o
00:20:24.275 CXX test/cpp_headers/util.o
00:20:24.275 CXX test/cpp_headers/uuid.o
00:20:24.533 CXX test/cpp_headers/version.o
00:20:24.533 CXX test/cpp_headers/vfio_user_pci.o
00:20:24.790 CXX test/cpp_headers/vfio_user_spec.o
00:20:24.790 CXX test/cpp_headers/vhost.o
00:20:25.048 CXX test/cpp_headers/vmd.o
00:20:25.048 CC examples/sock/hello_world/hello_sock.o
00:20:25.048 CXX test/cpp_headers/xor.o
00:20:25.306 LINK hello_sock
00:20:25.306 CXX test/cpp_headers/zipf.o
00:20:25.564 CC app/spdk_nvme_perf/perf.o
00:20:25.564 CC test/event/reactor/reactor.o
00:20:25.822 LINK reactor
00:20:25.822 LINK spdk_nvme_perf
00:20:28.350 CC app/spdk_nvme_identify/identify.o
00:20:28.350 CC test/event/reactor_perf/reactor_perf.o
00:20:28.350 LINK reactor_perf
00:20:28.607 CC app/spdk_nvme_discover/discovery_aer.o
00:20:28.607 LINK spdk_nvme_discover
00:20:28.607 LINK spdk_nvme_identify
00:20:31.135 CC examples/nvme/hello_world/hello_world.o
00:20:31.135 LINK hello_world
00:20:31.701 CC examples/nvme/reconnect/reconnect.o
00:20:31.701 LINK reconnect
00:20:32.633 CC app/spdk_top/spdk_top.o
00:20:33.198 LINK spdk_top
00:20:33.198 CC test/accel/dif/dif.o
00:20:33.476 CC examples/nvme/nvme_manage/nvme_manage.o
00:20:33.476 LINK dif
00:20:33.745 LINK nvme_manage
00:20:34.311 CC test/blobfs/mkfs/mkfs.o
00:20:34.311 LINK mkfs
00:20:34.311 CC examples/nvme/arbitration/arbitration.o
00:20:34.569 LINK arbitration
00:20:35.135 gmake[2]: Nothing to be done for 'all'.
00:20:35.135 CC examples/nvme/hotplug/hotplug.o
00:20:35.135 LINK hotplug
00:20:36.068 CC app/fio/nvme/fio_plugin.o
00:20:36.068 fio_plugin.c:1582:29: warning: field 'ruhs' with variable sized type 'struct spdk_nvme_fdp_ruhs' not at the end of a struct or class is a GNU extension [-Wgnu-variable-sized-type-not-at-end]
00:20:36.068 struct spdk_nvme_fdp_ruhs ruhs;
00:20:36.068 ^
00:20:36.068 1 warning generated.
00:20:36.068 LINK spdk_nvme
00:20:36.633 CC test/nvme/aer/aer.o
00:20:36.633 LINK aer
00:20:38.007 CC examples/nvme/cmb_copy/cmb_copy.o
00:20:38.007 LINK cmb_copy
00:20:39.382 CC app/fio/bdev/fio_plugin.o
00:20:39.382 CC test/nvme/reset/reset.o
00:20:39.640 LINK reset
00:20:39.640 LINK spdk_bdev
00:20:39.899 CC test/nvme/sgl/sgl.o
00:20:40.157 LINK sgl
00:20:40.415 CC examples/nvme/abort/abort.o
00:20:40.673 LINK abort
00:20:41.608 CC examples/accel/perf/accel_perf.o
00:20:41.608 LINK accel_perf
00:20:42.173 CC test/nvme/e2edp/nvme_dp.o
00:20:42.429 LINK nvme_dp
00:20:44.962 CC examples/blob/hello_world/hello_blob.o
00:20:44.962 CC examples/nvme/pmr_persistence/pmr_persistence.o
00:20:44.962 LINK hello_blob
00:20:44.962 LINK pmr_persistence
00:20:45.527 CC examples/blob/cli/blobcli.o
00:20:45.786 LINK blobcli
00:20:46.044 CC test/nvme/overhead/overhead.o
00:20:46.302 LINK overhead
00:20:46.868 CC test/nvme/err_injection/err_injection.o
00:20:46.868 LINK err_injection
00:20:46.868 CC examples/bdev/hello_world/hello_bdev.o
00:20:46.868 LINK hello_bdev
00:20:47.453 CC test/nvme/startup/startup.o
00:20:47.453 LINK startup
00:20:49.363 CC examples/bdev/bdevperf/bdevperf.o
00:20:49.927 LINK bdevperf
00:20:50.861 CC test/nvme/reserve/reserve.o
00:20:50.861 LINK reserve
00:20:51.429 CC test/nvme/simple_copy/simple_copy.o
00:20:51.429 CC test/nvme/connect_stress/connect_stress.o
00:20:51.687 LINK simple_copy
00:20:51.687 LINK connect_stress
00:20:54.215 CC test/nvme/boot_partition/boot_partition.o
00:20:54.215 LINK boot_partition
00:20:54.215 CC test/nvme/compliance/nvme_compliance.o
00:20:54.780 LINK nvme_compliance
00:20:55.712 CC test/bdev/bdevio/bdevio.o
00:20:55.712 LINK bdevio
00:20:55.969 CC test/nvme/fused_ordering/fused_ordering.o
00:20:55.969 LINK fused_ordering
00:20:55.969 CC test/nvme/doorbell_aers/doorbell_aers.o
00:20:56.226 LINK doorbell_aers
00:20:58.756 CC test/nvme/fdp/fdp.o
00:20:58.756 LINK fdp
00:21:10.960 CC examples/nvmf/nvmf/nvmf.o
00:21:10.960 LINK nvmf
00:21:32.881 21:20:41 -- spdk/autopackage.sh@44 -- $ gmake -j10 clean
00:21:32.881 gmake[1]: Nothing to be done for 'clean'.
00:21:32.881 ps: stdin: not a terminal
00:21:33.139 gmake[2]: Nothing to be done for 'clean'.
00:21:33.396 21:20:44 -- spdk/autopackage.sh@46 -- $ timing_exit build_release
00:21:33.396 21:20:44 -- common/autotest_common.sh@728 -- $ xtrace_disable
00:21:33.396 21:20:44 -- common/autotest_common.sh@10 -- $ set +x
00:21:33.396 21:20:44 -- spdk/autopackage.sh@48 -- $ timing_finish
00:21:33.396 21:20:44 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:21:33.396 21:20:44 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:21:33.396 21:20:44 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:21:33.396 21:20:44 -- pm/common@29 -- $ signal_monitor_resources TERM
00:21:33.396 21:20:44 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:21:33.396 21:20:44 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:21:33.396 21:20:44 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]]
00:21:33.654 21:20:44 -- pm/common@44 -- $ pid=69387
00:21:33.654 21:20:44 -- pm/common@50 -- $ kill -TERM 69387
+ [[ -n 1298 ]]
+ sudo kill 1298
00:21:33.666 [Pipeline] }
00:21:33.687 [Pipeline] // timeout
00:21:33.695 [Pipeline] }
00:21:33.714 [Pipeline] // stage
00:21:33.720 [Pipeline] }
00:21:33.742 [Pipeline] // catchError
00:21:33.753 [Pipeline] stage
00:21:33.756 [Pipeline] { (Stop VM)
00:21:33.774 [Pipeline] sh
00:21:34.058 + vagrant halt
00:21:37.335 ==> default: Halting domain...
00:22:03.909 [Pipeline] sh
00:22:04.189 + vagrant destroy -f
00:22:07.475 ==> default: Removing domain...
00:22:07.488 [Pipeline] sh
00:22:07.768 + mv output /var/jenkins/workspace/freebsd-vg-autotest/output
00:22:07.778 [Pipeline] }
00:22:07.797 [Pipeline] // stage
00:22:07.803 [Pipeline] }
00:22:07.826 [Pipeline] // dir
00:22:07.834 [Pipeline] }
00:22:07.860 [Pipeline] // wrap
00:22:07.869 [Pipeline] }
00:22:07.890 [Pipeline] // catchError
00:22:07.904 [Pipeline] stage
00:22:07.907 [Pipeline] { (Epilogue)
00:22:07.927 [Pipeline] sh
00:22:08.208 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:22:08.224 [Pipeline] catchError
00:22:08.227 [Pipeline] {
00:22:08.244 [Pipeline] sh
00:22:08.525 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:22:08.783 Artifacts sizes are good
00:22:08.793 [Pipeline] }
00:22:08.816 [Pipeline] // catchError
00:22:08.830 [Pipeline] archiveArtifacts
00:22:08.838 Archiving artifacts
00:22:08.882 [Pipeline] cleanWs
00:22:08.897 [WS-CLEANUP] Deleting project workspace...
00:22:08.897 [WS-CLEANUP] Deferred wipeout is used...
00:22:08.903 [WS-CLEANUP] done
00:22:08.910 [Pipeline] }
00:22:08.933 [Pipeline] // stage
00:22:08.941 [Pipeline] }
00:22:08.965 [Pipeline] // node
00:22:08.973 [Pipeline] End of Pipeline
00:22:09.027 Finished: SUCCESS